Training, simulation and collaboration in virtual reality in a robotic surgical system
Patent abstract:
A virtual reality system providing a virtual robotic surgical environment, and methods for using the virtual reality system, are described in this document. Within the virtual reality system, several user modes allow different types of interactions between a user and the virtual robotic surgical environment. For example, one variation of a method for facilitating navigation of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point, displaying a first window view of the virtual robotic surgical environment from a second vantage point, and displaying a second window view of the virtual robotic surgical environment from a third vantage point. Additionally, in response to a user input associating the first and second window views, a trajectory between the second and third vantage points can be generated by sequentially linking the first and second window views.
Publication number: BR112019022739A2
Application number: R112019022739
Filing date: 2018-06-28
Publication date: 2020-05-19
Inventors: Fai Kin Siu Bernard; Mark Johnson Eric; Yu Haoran; Eduardo Garcia Kilroy Pablo
Applicant: Verb Surgical Inc.
Primary IPC classification:
Patent description:
Descriptive Report of the Invention Patent for TRAINING, SIMULATION AND COLLABORATION IN VIRTUAL REALITY IN A ROBOTIC SURGICAL SYSTEM.
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority to US Provisional Patent Application Serial No. 62/526,919, filed on June 29, 2017, which is incorporated by reference in its entirety.
TECHNICAL FIELD
[002] This invention relates generally to the field of robotic surgery and, more specifically, to systems and methods useful for providing virtual robotic surgical environments.
BACKGROUND
[003] Minimally invasive surgery (MIS), such as laparoscopic surgery, involves techniques intended to reduce tissue damage during a surgical procedure. For example, laparoscopic procedures typically involve making several small incisions in the patient (for example, in the abdomen) and introducing one or more surgical instruments (for example, an end effector, at least one camera, etc.) through the incisions into the patient. The surgical procedures can then be performed using the introduced surgical instruments, with the aid of visualization provided by the camera.
[004] Generally, MIS provides several benefits, such as reduced patient scarring, less pain for the patient, shorter patient recovery periods, and lower medical treatment costs associated with patient recovery. In some embodiments, MIS can be performed with robotic systems that include one or more robotic arms for manipulating surgical instruments based on commands from an operator. A robotic arm can, for example, support at its distal end various devices such as end effectors, imaging devices, cannulas for providing access to the patient's body cavity and organs, etc.
[005] Robotic surgical systems are usually complex systems performing complex procedures. As a result, a user (for example, a surgeon) can often require significant training and experience to successfully operate a robotic surgical system. Such training and experience are advantageous for effectively planning the details of MIS procedures (for example, determining the number, location, and orientation of robotic arms, determining the number and optimal location of incisions, determining ideal types and sizes of surgical instruments, determining the order of actions in a procedure, etc.).
[006] Additionally, the process of designing robotic surgical systems can also be complicated. For example, improvements in hardware (for example, robotic arms) are prototyped as physical embodiments and physically tested. Improvements in software (for example, control algorithms for robotic arms) may also require physical embodiments. Such cyclical prototyping and testing is generally cumulatively expensive and time-consuming.
SUMMARY
[007] Generally, a virtual reality system for providing a virtual robotic surgical environment can include a virtual reality processor (for example, a processor in a computer implementing instructions stored in memory) for generating a virtual robotic surgical environment, a head-mounted display wearable by a user, and one or more handheld controllers manipulable by the user to interact with the virtual robotic surgical environment. The virtual reality processor can, in some variations, be configured to generate the virtual robotic surgical environment based on at least one predetermined configuration file describing a virtual component (for example, a virtual robotic component) in the virtual environment.
The head-mounted display can include an immersive display for displaying the virtual robotic surgical environment to the user (for example, with a first-person perspective view of the virtual environment). In some variations, the virtual reality system may additionally or alternatively include an external display for displaying the virtual robotic surgical environment. The immersive display and the external display, if both are present, can be synchronized to present the same or similar content. The virtual reality system can be configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects via the head-mounted display and/or the handheld controllers. The virtual reality system (and variations of it, as further described in this document) can serve as a useful tool with respect to robotic surgery, in applications including, but not limited to, training, simulation, and/or collaboration among several people.
[008] In some variations, a virtual reality system can interface with a real (non-virtual) operating room. The virtual reality system may allow visualization of a robotic surgical environment, and may include a virtual reality processor configured to generate a virtual robotic surgical environment comprising at least one virtual robotic component, and at least one sensor in a robotic surgical environment. The sensor can be in communication with the virtual reality processor and configured to detect a condition of a robotic component corresponding to the virtual robotic component. The virtual reality processor is configured to receive the detected condition of the robotic component and to modify the virtual robotic component based at least in part on the detected condition, so that the virtual robotic component mimics the robotic component.
[009] For example, a user can monitor a real robotic surgical procedure performed in a real operating room via a virtual reality system that interfaces with the real operating room (for example, the user can interact with a virtual reality environment that reflects the conditions in the actual operating room). The detected positions of the robotic components during a surgical procedure can be compared with their expected positions as determined from surgical pre-planning in a virtual environment, so that deviations from the surgical plan can prompt a surgeon to make adjustments to avoid collisions (for example, changing the position of a robotic arm, etc.).
[0010] In some variations, the one or more sensors can be configured to detect characteristics or conditions of a robotic component such as position, orientation, speed, or velocity. As an illustrative example, the one or more sensors in the robotic surgical environment can be configured to detect the position and/or orientation of a robotic component such as a robotic arm. The position and orientation of the robotic arm can be fed to the virtual reality processor, which moves or otherwise modifies a virtual robotic arm corresponding to the real robotic arm. Thus, a user viewing the virtual robotic surgical environment can view the adjusted virtual robotic arm. As another illustrative example, one or more sensors can be configured to detect a collision involving the robotic component in the robotic surgical environment, and the system can provide an alarm notifying the user of the occurrence of the collision.
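For illustration only, the following minimal Python sketch shows one way the sensor-driven mirroring and collision alarm described above might be structured; the names (DetectedArmCondition, VirtualRoboticArm, update_virtual_arm) are assumptions made for this example and do not come from the original disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DetectedArmCondition:
    """Condition of a real robotic arm as reported by operating-room sensors."""
    joint_angles: List[float]     # detected joint positions, e.g., in radians
    in_collision: bool = False    # collision flag from the sensor layer

@dataclass
class VirtualRoboticArm:
    """Virtual counterpart of the real robotic arm."""
    name: str
    joint_angles: List[float] = field(default_factory=list)

    def mimic(self, condition: DetectedArmCondition) -> None:
        # Modify the virtual arm so that it mirrors the detected condition.
        self.joint_angles = list(condition.joint_angles)

def update_virtual_arm(virtual_arm: VirtualRoboticArm,
                       read_condition: Callable[[], DetectedArmCondition]) -> None:
    """Poll the sensors and keep the virtual arm synchronized with the real one."""
    condition = read_condition()
    virtual_arm.mimic(condition)
    if condition.in_collision:
        # Stand-in for an audible or visual alarm presented to the user.
        print(f"[ALERT] Collision detected involving {virtual_arm.name}")
```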
[0011] Within the virtual reality system, several user modes allow different types of interaction between a user and the virtual robotic surgical environment. For example, one variation of a method for facilitating navigation of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point within the virtual robotic surgical environment, displaying a first window view of the virtual robotic surgical environment from a second vantage point, and displaying a second window view of the virtual robotic surgical environment from a third vantage point. The first and second window views can be displayed in respective regions of the displayed first-person perspective view. In addition, the method may include, in response to a user input associating the first and second window views, sequentially linking the first and second window views to generate a trajectory between the second and third vantage points. Window views of the virtual robotic surgical environment can be displayed at different scale factors (for example, zoom levels), and can offer views of the virtual environment from any suitable vantage point in the virtual environment, such as inside the virtual patient, over the virtual patient, etc.
[0012] In response to a user input indicating selection of a particular window view, the method may include displaying a new first-person perspective view of the virtual environment from the vantage point of the selected window view. In other words, window views can, for example, operate as portals facilitating transport between different vantage points within the virtual environment.
[0013] As another example of interaction between a user and the virtual robotic surgical environment, a variation of a method for facilitating visualization of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point within the virtual robotic surgical environment, receiving a user input indicating placement of a virtual camera at a second vantage point within the virtual robotic surgical environment different from the first vantage point, generating a virtual camera perspective view of the virtual robotic surgical environment from the second vantage point, and displaying the virtual camera perspective view in a region of the displayed first-person perspective view. The camera view can, for example, provide a supplementary view of the virtual environment that allows the user to monitor various aspects of the environment simultaneously while maintaining primary focus on a main first-person perspective view. In some variations, the method may also include receiving a user input indicating a selection of a virtual camera type (for example, a cinema camera configured to be placed outside a virtual patient, an endoscopic camera configured to be placed inside a virtual patient, a 360-degree camera, etc.) and displaying a virtual model of the selected virtual camera type at the second vantage point within the virtual robotic surgical environment. Other examples of user interactions with the virtual environment are described in this document.
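Purely as an illustration of the portal-like navigation just described, the sketch below shows one possible way window views anchored at vantage points could be linked into a trajectory and used to relocate the first-person view; all class and variable names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VantagePoint:
    position: Tuple[float, float, float]   # location in the virtual operating room
    label: str = ""

@dataclass
class WindowView:
    vantage: VantagePoint
    scale: float = 1.0                     # zoom level of the window view

@dataclass
class Navigator:
    current: VantagePoint
    trajectory: List[VantagePoint] = field(default_factory=list)

    def link(self, first: WindowView, second: WindowView) -> None:
        # Sequentially linking two window views builds a path between their vantage points.
        for window in (first, second):
            if window.vantage not in self.trajectory:
                self.trajectory.append(window.vantage)

    def select(self, window: WindowView) -> None:
        # Selecting a window view relocates the first-person view to its vantage point,
        # so window views behave like portals between vantage points.
        self.current = window.vantage

# Example: link a window view inside the virtual patient with one above the table.
inside = WindowView(VantagePoint((0.0, 0.2, 0.0), "inside patient"), scale=3.0)
above = WindowView(VantagePoint((0.0, 1.8, 0.5), "above table"))
navigator = Navigator(current=VantagePoint((2.0, 1.7, 2.0), "room corner"))
navigator.link(inside, above)
navigator.select(above)
```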
[0014] In another variation of a virtual reality system, the virtual reality system may resemble a robotic surgical environment in which a user can operate both a robotically controlled surgical instrument, using a handheld controller, and a manual laparoscopic surgical instrument (for example, while adjacent to a patient table, or at the bedside). For example, a virtual reality system for simulating a robotic surgical environment may include a virtual reality controller configured to generate a virtual robotic surgical environment comprising at least one virtual robotic arm and at least one virtual manual laparoscopic tool, a first handheld device communicatively coupled with the virtual reality controller for manipulating the at least one virtual robotic arm in the virtual robotic surgical environment, and a second handheld device comprising a handheld portion and a tool component representative of at least a part of a manual laparoscopic tool, wherein the second handheld device is communicatively coupled with the virtual reality controller for manipulating the at least one virtual manual laparoscopic tool in the virtual robotic surgical environment. For example, in some variations, the tool component may include a tool shaft and a shaft adapter for coupling the tool shaft with the handheld portion of the second handheld device (for example, the shaft adapter may include fasteners). The second handheld device can be a functional manual laparoscopic tool or a model (for example, a faithful copy or a generic version) of a manual laparoscopic tool, whose movements (for example, of the tool component) can be mapped by the virtual reality controller to correspond to movements of the virtual manual laparoscopic tool.
[0015] The second handheld device can be modular. For example, the tool component can be removable from the handheld portion of the second handheld device, thereby allowing the second handheld device to function both as a laparoscopic handheld device (to control a virtual manual laparoscopic tool) when the tool component is attached to the handheld portion, and as a non-laparoscopic handheld device (for example, to control a robotically controlled tool or robotic arm) when the tool component is detached from the handheld portion. In some variations, the handheld portion of the second handheld device can be substantially similar to the first handheld device.
[0016] The handheld portion of the second handheld device can include an interactive feature, such as a trigger or button, that actuates a function of the virtual manual laparoscopic tool in response to engagement of the interactive feature by a user. For example, a trigger on the handheld portion of the second handheld device can be mapped to a virtual trigger on the virtual manual laparoscopic tool. As an illustrative example, in a variation in which the virtual manual laparoscopic tool is a virtual manual laparoscopic stapler, a trigger on the handheld portion can be mapped to actuate the virtual stapler in the virtual environment. Other aspects of the system can further approximate the setup of the virtual tool in the virtual environment. For example, the virtual reality system can also include a patient simulator (for example, a simulated patient abdomen) including a cannula configured to receive at least a part of the tool component of the second handheld device, to further simulate the feel of a manual laparoscopic tool.
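For illustration of the trigger-to-virtual-trigger mapping mentioned above, a minimal sketch follows; the class names and the single firing action are assumptions for this example, not details from the disclosure.

```python
class VirtualLaparoscopicStapler:
    """Virtual manual laparoscopic tool with an actuatable function."""
    def __init__(self) -> None:
        self.fire_count = 0

    def actuate(self) -> None:
        # Actuate the virtual stapler within the virtual environment.
        self.fire_count += 1

class SecondHandheldDevice:
    """Handheld portion plus attachable tool component, mapped onto a virtual tool."""
    def __init__(self, virtual_tool: VirtualLaparoscopicStapler) -> None:
        self.virtual_tool = virtual_tool
        self.tool_component_attached = True   # shaft and adapter coupled to the handheld portion

    def on_trigger(self, pressed: bool) -> None:
        # Engagement of the physical trigger is mapped to the virtual trigger.
        if pressed and self.tool_component_attached:
            self.virtual_tool.actuate()

device = SecondHandheldDevice(VirtualLaparoscopicStapler())
device.on_trigger(pressed=True)   # actuates the virtual stapler once
```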
[0017] Generally, a computer-implemented method for operating a virtual robotic surgical environment may include generating a virtual robotic surgical environment using a client application, where the virtual robotic surgical environment includes at least one virtual robotic component, and passing information between two software applications in order to effect movements of the virtual robotic component. For example, in response to a user command to move the at least one virtual robotic component in the virtual robotic surgical environment, the method may include passing condition information regarding the at least one virtual robotic component from the client application to a server application, generating an actuation command based on the user command and the condition information using the server application, passing the actuation command from the server application to the client application, and moving the at least one virtual robotic component based on the actuation command. The client application and the server application can run on a shared processor device, or on separate processor devices.
[0018] In some variations, passing the condition information and/or passing the actuation command may include invoking an application programming interface (API) to support communication between the client and server applications. The API can include one or more definitions of data structures for virtual robotic components and for other virtual components in the virtual environment. For example, the API can include several data structures for a virtual robotic arm, a virtual robotic arm segment (for example, a link), a virtual patient table, a virtual cannula, and/or a virtual surgical instrument. As another example, the API can include a data structure for a virtual contact point to allow manipulation of the at least one virtual robotic component (for example, a virtual robotic arm) or of another virtual component.
[0019] For example, the method may include passing condition information regarding a virtual robotic arm, such as position and orientation (for example, the pose of the virtual robotic arm). The client application can pass such condition information to the server application, whereupon the server application can generate an actuation command based on kinematics associated with the virtual robotic arm.
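For illustration of the client-server exchange summarized in [0017]-[0019], the sketch below shows possible data structures for condition information and actuation commands and a placeholder kinematics step; all field and class names are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualArmCondition:             # condition information reported by the client
    arm_id: str
    joint_angles: List[float]          # current pose of the virtual robotic arm
    contact_point_pose: List[float]    # pose of the user-manipulated virtual contact point

@dataclass
class ActuationCommand:                # command returned by the server
    arm_id: str
    joint_commands: List[float]

class KinematicsServer:
    """Server application: turns condition information plus a user command into an
    actuation command (placeholder kinematics, for illustration only)."""
    def generate_command(self, condition: VirtualArmCondition,
                         user_command: str) -> ActuationCommand:
        return ActuationCommand(condition.arm_id, [0.0] * len(condition.joint_angles))

class VirtualEnvironmentClient:
    """Client application: owns the virtual environment and applies returned commands."""
    def __init__(self, server: KinematicsServer) -> None:
        self.server = server

    def move_virtual_arm(self, condition: VirtualArmCondition,
                         user_command: str) -> ActuationCommand:
        command = self.server.generate_command(condition, user_command)
        # A full implementation would update the virtual arm's joints here.
        return command
```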
[0020] As described in this document, there are several applications and uses for the virtual reality system. In one variation, the virtual reality system can be used to streamline the R&D cycle during development of a robotic surgical system, such as by allowing simulation of potential designs without the time and significant expense of physical prototypes. For example, a method for designing a robotic surgical system may include generating a virtual model of a robotic surgical system, testing the virtual model of the robotic surgical system in a virtual operating room environment, modifying the virtual model of the robotic surgical system based on the testing, and generating a real model of the robotic surgical system based on the modified virtual model. Testing the virtual model may, for example, involve performing a virtual surgical procedure using a virtual robotic arm and a virtual surgical instrument supported by the virtual robotic arm, such as through the client application described in this document. During a test, the system can detect one or more collision events involving the virtual robotic arm, which can, for example, trigger a modification of the virtual model (for example, modifying the virtual robotic arm's link length, diameter, etc.) in response to the detected collision event. Additional testing of the modified virtual model can then be performed to confirm that the modification reduced the likelihood of the collision event occurring during the virtual surgical procedure. Consequently, testing and modifying virtual surgical system designs in a virtual environment can be used to identify problems before physical prototypes of the design are tested.
[0021] In another variation, the virtual reality system can be used to test a control mode for a robotic surgical component. For example, a method for testing a control mode for a robotic surgical component may include generating a virtual robotic surgical environment, the virtual robotic surgical environment comprising at least one virtual robotic component corresponding to the robotic surgical component, emulating a control mode for the robotic surgical component in the virtual robotic surgical environment, and, in response to a user command to move the at least one virtual robotic component, moving the at least one virtual robotic component according to the emulated control mode. In some variations, moving the virtual robotic component may include passing condition information regarding the at least one virtual robotic component from a first application (for example, a virtual operating environment application) to a second application (for example, a kinematics application), generating an actuation command based on the condition information and the emulated control mode, passing the actuation command from the second application to the first application, and moving the at least one virtual robotic component in the virtual robotic surgical environment based on the actuation command.
[0022] For example, the control mode to be tested can be a trajectory-following control mode for a robotic arm. In trajectory following, the movement of the robotic arm can be programmed and then emulated using the virtual reality system. Consequently, when the system is used to emulate a trajectory-following control mode, the actuation command generated by a kinematics application can include generating an actuated command for each of several virtual joints in the virtual robotic arm. This set of actuated commands can be implemented by a virtual operating environment application to move the virtual robotic arm in the virtual environment, thus allowing testing with respect to collisions, volume or workspace of movement, etc.
[0023] Other variations and examples of virtual reality systems, their user modes and interactions, and applications and uses of the virtual reality system, are described in further detail in this document.
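As a purely illustrative sketch of the trajectory-following emulation in [0022], the function below interpolates between joint-space waypoints and produces one command per virtual joint per time step; joint-space waypoints and linear interpolation are simplifications chosen for brevity (a real implementation could accept Cartesian commands and apply inverse kinematics), and all names are hypothetical.

```python
from typing import List

def emulate_trajectory_following(virtual_arm, waypoints: List[List[float]],
                                 steps_per_segment: int = 10) -> List[List[float]]:
    """virtual_arm: any object with a `joint_angles` list and a `set_joints(angles)` method;
    waypoints: joint-angle targets to visit in order."""
    commands = []
    current = list(virtual_arm.joint_angles)
    for target in waypoints:
        for step in range(1, steps_per_segment + 1):
            t = step / steps_per_segment
            # One interpolated command per virtual joint for this time step.
            commands.append([c + t * (g - c) for c, g in zip(current, target)])
        current = list(target)
    # Apply the command set in the virtual environment, e.g., to test for collisions
    # or to measure the workspace swept by the arm.
    for joint_command in commands:
        virtual_arm.set_joints(joint_command)
    return commands
```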
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] FIGURE 1A depicts an example of an operating room layout with a robotic surgical system and a surgeon's console. FIGURE 1B is a schematic illustration of an illustrative variation of a robotic arm manipulator, tool controller, and cannula with a surgical tool.
[0025] FIGURE 2A is a schematic illustration of a variation of a virtual reality system. FIGURE 2B is a schematic illustration of an immersive display for displaying an immersive view of a virtual reality environment.
[0026] FIGURE 3 is a schematic illustration of components of a virtual reality system.
[0027] FIGURE 4A is an illustrative structure for communication between a virtual reality environment application and a kinematics application for use in a virtual reality system. FIGURES 4B and 4C are tables summarizing illustrative data structures and fields for an application program interface for communication between the virtual reality environment application and the kinematics application.
[0028] FIGURE 5A is a schematic illustration of another variation of a virtual reality system including an illustrative variation of a manual laparoscopic controller. FIGURE 5B is a schematic illustration of an immersive display for displaying an immersive view of a virtual reality environment including a virtual manual laparoscopic tool controlled by the manual laparoscopic controller.
[0029] FIGURE 6A is a perspective view of an illustrative variation of a manual laparoscopic controller. FIGURE 6B is a schematic illustration of a virtual manual laparoscopic tool overlaid on a part of the manual laparoscopic controller shown in FIGURE 6A. FIGURES 6C to 6E are a side view, a detailed partial perspective view, and a partial cross-sectional view, respectively, of the manual laparoscopic controller shown in FIGURE 6A.
[0030] FIGURE 7 is a schematic illustration of another variation of a virtual reality system interfacing with a robotic surgical environment.
[0031] FIGURE 8 is a schematic illustration of a displayed menu for selecting one or more user modes of a variation of a virtual reality system.
[0032] FIGURES 9A to 9C are schematic illustrations of a virtual robotic surgical environment with illustrative portals.
[0033] FIGURES 10A and 10B are schematic illustrations of an illustrative virtual robotic surgical environment seen in a flight mode. FIGURE 10C is a schematic illustration of a transition region for modifying a view of the illustrative virtual robotic surgical environment in flight mode.
[0034] FIGURE 11 is a schematic illustration of a virtual robotic surgical environment seen from a vantage point providing a dollhouse view of a virtual operating room.
[0035] FIGURE 12 is a schematic illustration of a view of a virtual robotic surgical environment with a transparent heads-up display for displaying supplementary views.
[0036] FIGURE 13 is a schematic illustration of a display provided by a variation of a virtual reality system operating in a virtual command station mode.
[0037] FIGURE 14 is a flow chart of an illustrative variation of a method for operating a user mode menu for selecting user modes in a virtual reality system.
[0038] FIGURE 15 is a flow chart of an illustrative variation of a method for operating in an environment view rotation mode in a virtual reality system.
[0039] FIGURE 16 is a flow chart of an illustrative variation of a method for operating a user mode enabling snap points in a virtual environment.
DETAILED DESCRIPTION
[0040] Examples of various aspects and variations of the invention are described in this document and illustrated in the accompanying drawings. The following description is not intended to limit the invention to these embodiments, but rather to allow a person skilled in the art to produce and use the invention.
Overview of the robotic surgical system
[0041] An illustrative robotic surgical system and surgical environment are illustrated in FIGURE 1A. As shown, a robotic surgical system 150 can include one or more robotic arms 160 located at a surgical platform (for example, a table, a bed, etc.), where end effectors or surgical tools are attached to the distal ends of the robotic arms 160 for performing a surgical procedure. For example, the robotic surgical system 150 may include, as shown in the illustrative scheme of FIGURE 1B, at least one robotic arm 160 coupled with a surgical platform, and a tool controller 170 generally attached to a distal end of the robotic arm 160. A cannula coupled with the end of the tool controller 170 can receive and guide a surgical instrument 190 (for example, an end effector, a camera, etc.). In addition, the robotic arm 160 can include several links that are actuated in order to position and orient the tool controller 170, which actuates the surgical instrument 190. The robotic surgical system can also include a control tower 152 (for example, including a power supply, computing equipment, etc.) and/or other equipment suitable for supporting the functionality of the robotic components.
[0042] In some variations, a user (such as a surgeon or another operator) may use a user console 100 to remotely manipulate the robotic arms 160 and/or surgical instruments (for example, by tele-operation). The user console 100 can be located in the same procedure room as the robotic system 150, as shown in FIGURE 1A. In other embodiments, the user console 100 may be located in an adjacent or nearby room, or tele-operated from a remote location in a different building, city, or country. In one example, the user console 100 comprises a seat 110, foot-operated controls 120, one or more handheld user interface devices 122, and at least one user display 130 configured to display, for example, a view of the surgical site within a patient. For example, as shown in the illustrative user console of FIGURE 1A, a user seated in the seat 110 and viewing the user display 130 can manipulate the foot-operated controls 120 and/or the handheld user interface devices to remotely control the robotic arms 160 and/or the surgical instruments.
[0043] In some variations, a user can operate the robotic surgical system 150 in an over-the-bed (OTB) mode, in which the user is at the patient's side and simultaneously manipulates a robotically driven tool controller/end effector attached thereto (for example, with a handheld user interface device 122 held in one hand) and a manual laparoscopic tool. For example, the user's left hand may be manipulating a handheld user interface device 122 to control a robotic surgical component, while the user's right hand may be manipulating a manual laparoscopic tool. Thus, in these variations, the user can perform both robot-assisted MIS and manual laparoscopic techniques on a patient.
[0044] During an illustrative procedure or surgery, the patient is prepped and draped in a sterile fashion, and anesthesia is administered. Initial access to the surgical site can be performed manually with the robotic system 150 in a stowed or retracted configuration to facilitate access to the surgical site. Once access is complete, initial positioning and/or preparation of the robotic system can be performed.
During the surgical procedure, a surgeon or other user at the user console 100 can use the foot-operated controls 120 and/or the user interface devices 122 to manipulate various end effectors and/or imaging systems to perform the procedure. Manual assistance can also be provided at the procedure table by sterile-gowned staff, who can perform tasks including, but not limited to, retracting organs or performing manual repositioning or tool exchange involving one or more robotic arms 160. Non-sterile staff may also be present to assist the surgeon at the user console 100. When the procedure or surgery is complete, the robotic system 150 and/or the user console 100 can be configured or set into a state facilitating one or more post-operative procedures, including, but not limited to, cleaning and/or sterilizing the robotic system 150, and/or entering or printing a health care record, whether electronic or hard copy, such as via the user console 100.
[0045] In FIGURE 1A, the robotic arms 160 are shown with a table-mounted system, but in other embodiments the robotic arms can be mounted on a cart, on the ceiling, on a side wall, or on other suitable support surfaces. Communication between the robotic system 150, the user console 100, and any other displays can be via wired and/or wireless connections. Any wired connections can optionally be embedded in the floor and/or the walls or ceiling. Communication between the user console 100 and the robotic system 150 can be wired and/or wireless, and can be proprietary and/or performed using any of several data communication protocols. In still other variations, the user console 100 does not include an integrated display 130, but can provide a video output that can be connected to one or more generic displays, including remote displays accessible via the internet or a network. The video output or feed can also be encrypted to ensure privacy, and all or parts of the video output can be saved to a server or an electronic medical record system.
[0046] In other examples, additional user consoles 100 may be provided, for example, to control additional surgical instruments and/or to take control of one or more surgical instruments at a primary user console. This will allow, for example, a surgeon to take over or demonstrate a technique during a surgical procedure with medical students and physicians in training, or to assist during complex surgeries requiring multiple surgeons acting simultaneously or in a coordinated manner.
Virtual reality system
[0047] A virtual reality system for providing a virtual robotic surgical environment is described in this document. As shown in FIGURE 2A, a virtual reality system 200 can include a virtual reality processor 210 (for example, a processor in a computer implementing instructions stored in memory) for generating a virtual robotic surgical environment, a head-mounted display 220 wearable by a user U, and one or more handheld controllers 230 manipulable by the user U to interact with the virtual robotic surgical environment. As shown in FIGURE 2B, the head-mounted display 220 may include an immersive display 222 for displaying the virtual robotic surgical environment to the user U (for example, with a first-person perspective view of the virtual environment). The immersive display can, for example, be a stereoscopic display provided by two eyepiece assemblies.
In some variations, the virtual reality system 200 may additionally or alternatively include an external display 240 for displaying the virtual robotic surgical environment. The immersive display 222 and the external display 240, if both are present, can be synchronized to present the same or similar content.
[0048] As described in further detail in this document, the virtual reality system (and variations thereof) can serve as a useful tool with respect to robotic surgery, in applications including, but not limited to, training, simulation, and/or collaboration among several people. More specific examples of applications and uses of the virtual reality system are described in this document.
[0049] Generally, the virtual reality processor is configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects via the head-mounted display and/or the handheld controllers. For example, a virtual robotic surgical system can be integrated into a virtual operating room, with one or more virtual robotic components having three-dimensional meshes and selected characteristics (for example, dimensions and kinematic constraints of virtual robotic arms and/or surgical tools, their number and arrangement, etc.). Other virtual objects, such as virtual control towers or other virtual equipment representing equipment supporting the robotic surgical system, a virtual patient, a virtual table or other surface for the patient, virtual medical staff, a virtual user console, etc., can also be integrated into the virtual reality operating room.
[0050] In some variations, the head-mounted display 220 and/or the handheld controllers 230 can be modified versions of those included in any suitable virtual reality hardware system that is commercially available for applications including virtual and augmented reality environments (for example, for gaming and/or military purposes) and are familiar to those skilled in the art. For example, the head-mounted display 220 and/or the handheld controllers 230 can be modified to allow interaction by a user with a virtual robotic surgical environment (for example, a handheld controller 230 can be modified as described below to operate as a manual laparoscopic controller). The handheld controller may include, for example, a carried device (for example, a wand, remote device, etc.) and/or a garment worn on the user's hand (for example, gloves, rings, bracelets, etc.) and including sensors and/or configured to cooperate with external sensors to thereby provide tracking of the user's hand(s), individual finger(s), wrist(s), etc. Other suitable controllers may additionally or alternatively be used (for example, gloves configured to provide tracking of the user's arm(s)).
[0051] Generally, a user U can wear the head-mounted display 220 and carry (or wear) at least one handheld controller 230 while he or she moves around a physical workspace, such as a training room. While wearing the head-mounted display 220, the user can see an immersive first-person perspective view of the virtual robotic surgical environment generated by the virtual reality processor 210 and displayed in the immersive display 222.
As shown in FIGURE 2B, the view shown in the immersive display 222 may include one or more graphical representations 230' of the handheld controllers (for example, virtual models of the handheld controllers, virtual models of human hands in place of or holding the handheld controllers, etc.). A similar first-person perspective view can be displayed on an external display 240 (for example, for assistants, mentors, or other suitable viewers). As the user moves and navigates within the workspace, the virtual reality processor 210 can change the view of the virtual robotic surgical environment displayed in the immersive display 222 based at least in part on the location and orientation of the head-mounted display (and, consequently, on the user's location and orientation), thereby allowing the user to feel as if he or she were exploring and moving within the virtual robotic surgical environment.
[0052] Additionally, the user can also interact with the virtual robotic surgical environment by moving and/or manipulating the handheld controllers 230. For example, the handheld controllers 230 can include one or more buttons, triggers, touch-sensitive features, scroll wheels, switches, and/or other suitable interactive features that the user can manipulate to interact with the virtual environment. As the user moves the handheld controllers 230, the virtual reality processor 210 can move the graphical representations 230' of the handheld controllers (or a cursor or other representative icon) within the virtual robotic surgical environment. In addition, engaging one or more interactive features of the handheld controllers 230 can allow the user to manipulate aspects of the virtual environment. For example, the user can move a handheld controller 230 until the graphical representation 230' of the handheld controller is close to a virtual contact point (for example, a selectable location) on a virtual robotic arm in the environment, engage a trigger or other interactive feature on the handheld controller 230 to select the virtual contact point, and then move the handheld controller 230 while engaging the trigger to drag or otherwise manipulate the virtual robotic arm via the virtual contact point. Further examples of user interactions with the virtual robotic surgical environment are described below in additional detail.
[0053] In some variations, the virtual reality system may engage other senses of the user. For example, the virtual reality system may include one or more audio devices (for example, headphones for the user, speakers, etc.) to relay audio feedback to the user. As another example, the virtual reality system can provide tactile feedback, such as vibration, in one or more of the handheld controllers 230, in the head-mounted display 220, or in other haptic devices in contact with the user (for example, gloves, bracelets, etc.).
Virtual reality processor
[0054] The virtual reality processor 210 can be configured to generate a virtual robotic surgical environment within which a user can navigate around a virtual operating room and interact with virtual objects. A general scheme illustrating an illustrative interaction between the virtual reality processor and at least some components of the virtual reality system is shown in FIGURE 3.
[0055] In some variations, the virtual reality processor 210 may be in communication with hardware components such as the head-mounted display 220 and/or the handheld controllers 230.
For example, the virtual reality processor 210 may receive inputs from sensors in the head-mounted display 220 to determine the location and orientation of the user within the physical workspace, which can be used to generate a suitable, corresponding first-person perspective view of the virtual environment for display in the head-mounted display 220 for the user. As another example, the virtual reality processor 210 may receive inputs from sensors in the handheld controllers 230 to determine the location and orientation of the handheld controllers 230, which can be used to generate suitable graphical representations of the handheld controllers 230 for display in the head-mounted display 220 for the user, as well as to convert user inputs (for interacting with the virtual environment) into corresponding modifications of the virtual robotic surgical environment. The virtual reality processor 210 can be coupled with an external display 240 (for example, a monitor screen) that is visible in a non-immersive manner to the user and/or to other people, such as assistants or mentors, who may wish to view the user's interactions with the virtual environment.
[0056] In some variations, the virtual reality processor 210 (or several processor machines) can be configured to execute one or more software applications for generating the virtual robotic surgical environment. For example, as shown in FIGURE 4A, the virtual reality processor 210 can use at least two software applications, including a virtual operating environment application 410 and a kinematics application 420. The virtual operating environment application and the kinematics application can communicate via a client-server model. For example, the virtual operating environment application can operate as a client, while the kinematics application can operate as a server. The virtual operating environment application 410 and the kinematics application 420 can be executed on the same processor machine, or on separate processor machines coupled via a computer network (for example, the client or the server can be a remote device, or the machines can be on a local computer network). Additionally, it should be understood that in other variations the virtual operating environment application 410 and/or the kinematics application 420 can interface with other software components. In some variations, the virtual operating environment application 410 and the kinematics application 420 can invoke one or more application program interfaces (APIs), which define the manner in which the applications communicate with each other.
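As a hedged illustration of the client-server split just described, the sketch below expresses the kinematics side as a plain interface that the virtual operating environment application can invoke either in-process or across a network boundary; the interface name, methods, and message layout are assumptions and are not taken from the disclosure.

```python
import json
from abc import ABC, abstractmethod

class KinematicsAPI(ABC):
    """Contract the virtual operating environment application (client) invokes."""
    @abstractmethod
    def request_actuation(self, component_id: str, condition: dict) -> dict:
        ...

class InProcessKinematics(KinematicsAPI):
    """Both applications run on the same processor machine."""
    def request_actuation(self, component_id: str, condition: dict) -> dict:
        joints = condition.get("joint_angles", [])
        return {"component_id": component_id, "joint_commands": [0.0] * len(joints)}

class NetworkedKinematics(KinematicsAPI):
    """Same contract, forwarded to a kinematics application on another machine."""
    def __init__(self, send) -> None:
        self.send = send   # e.g., a small wrapper around a socket or RPC library

    def request_actuation(self, component_id: str, condition: dict) -> dict:
        payload = json.dumps({"component_id": component_id, "condition": condition})
        return json.loads(self.send(payload))

# Because the client is written against KinematicsAPI, the same client code works
# whether the kinematics application is local or remote.
```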
[0057] The virtual operating environment application 410 may allow a description or definition of the virtual operating room environment (for example, the operating room, operating table, control tower or other components, user console, robotic arms, table adapter links coupling the robotic arms to the operating table, etc.). At least some descriptions of the virtual operating room environment can be saved (for example, in a model virtual reality component database 202) and provided to the processor as configuration files. For example, in some variations, as shown in FIGURE 3, the virtual reality processor (such as through the virtual operating environment application 410 described above) may be in communication with a model virtual reality component database 202 (for example, stored on a server, a local or remote hard drive, or other suitable memory). The model virtual reality component database 202 can store one or more configuration files describing virtual components of the virtual robotic surgical environment. For example, the database 202 can store files describing different types of operating rooms (for example, varying in room shape or room dimensions), operating tables or other surfaces on which a patient lies (for example, varying in size, height, surfaces, material construction, etc.), control towers (for example, varying in size and shape), user consoles (for example, varying in user seat design), robotic arms (for example, varying in design of arm links and joints, number and arrangement thereof, number and location of virtual contact points on the arms, etc.), table adapter links coupling robotic arms with an operating table (for example, varying in design of the adapter links and joints, number and arrangement thereof, etc.), patient types (for example, varying in sex, age, weight, height, waist circumference, etc.), and/or medical staff (for example, generic graphical representations of people, graphical representations of actual medical staff, etc.). As a specific example, a configuration file in the Unified Robot Description Format (URDF) can store a configuration of a particular robotic arm, including definitions or values for fields such as the number of arm links, the number of arm joints connecting the arm links, the length of each arm link, the diameter or circumference of each arm link, the mass of each arm link, the type of each arm joint (for example, roll, pitch, yaw, etc.), etc. Additionally, kinematic constraints can be loaded as a wrapper over a virtual robotic component (for example, an arm) to further define the kinematic behavior of the virtual robotic component. In other variations, the virtual reality processor 210 can receive any suitable descriptions of virtual components to load and generate in the virtual robotic surgical environment. Accordingly, the virtual reality processor 210 can receive and use different combinations of configuration files and/or other virtual component descriptions to generate particular virtual robotic surgical environments.
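For illustration of the kinds of fields such a configuration file might supply (link lengths, diameters, masses, joint types), the sketch below loads a simple JSON stand-in into Python dataclasses; URDF itself is an XML format, and the field and file names here are assumptions made only for this example.

```python
from dataclasses import dataclass
from typing import List
import json

@dataclass
class LinkConfig:
    name: str
    length_m: float
    diameter_m: float
    mass_kg: float

@dataclass
class JointConfig:
    name: str
    joint_type: str           # e.g., "roll", "pitch", or "yaw"
    parent_link: str
    child_link: str

@dataclass
class VirtualArmConfig:
    links: List[LinkConfig]
    joints: List[JointConfig]

def load_arm_config(path: str) -> VirtualArmConfig:
    """Read a JSON stand-in for a robot description file and build the virtual arm model."""
    with open(path) as f:
        raw = json.load(f)
    return VirtualArmConfig(
        links=[LinkConfig(**entry) for entry in raw["links"]],
        joints=[JointConfig(**entry) for entry in raw["joints"]],
    )
```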
[0058] In some variations, as shown in FIGURE 3, the virtual reality processor 210 may additionally or alternatively be in communication with a patient records database 204, which can store patient-specific information. Such patient-specific information may include, for example, patient imaging data (for example, X-ray, MRI, CT, ultrasound, etc.), medical history, and/or patient measurements (for example, age, weight, height, etc.), although other suitable patient-specific information may additionally or alternatively be stored in the patient records database 204. When generating the virtual robotic surgical environment, the virtual reality processor 210 can receive patient-specific information from the patient records database 204 and integrate at least some of the received information into the virtual reality environment. For example, a realistic representation of the patient's body or other tissue can be generated and incorporated into the virtual reality environment (for example, a 3D model generated from a combined stack of 2D images, such as MRI images), which can be useful, for example, for determining a desirable arrangement of robotic arms around the patient, optimal port placement, etc., for a particular patient, as further described in this document. As another example, patient imaging data can be overlaid on a part of the user's field of view of the virtual environment (for example, superimposing an ultrasound image of a patient's tissue over the corresponding virtual patient tissue).
[0059] In some variations, the virtual reality processor 210 may incorporate one or more kinematic algorithms via the kinematics application 420 to at least partially describe the behavior of one or more components of the virtual robotic system in the virtual robotic surgical environment. For example, one or more algorithms can define how a virtual robotic arm responds to user interactions (for example, moving the virtual robotic arm by selecting and manipulating a contact point on the virtual robotic arm), or how a virtual robotic arm operates in a selected control mode. Other kinematic algorithms, such as those defining the operation of a virtual tool controller, a virtual patient table, or other virtual components, can additionally or alternatively be incorporated into the virtual environment. By incorporating into the virtual environment one or more kinematic algorithms that accurately describe the behavior of a real robotic surgical system, the virtual reality processor 210 can allow the virtual robotic surgical system to function accurately or realistically compared to a physical implementation of a real robotic surgical system. For example, the virtual reality processor 210 may incorporate at least one control algorithm that represents or corresponds to one or more control modes defining movements of a robotic component (for example, an arm) in a real robotic surgical system.
[0060] For example, the kinematics application 420 can allow a description or definition of one or more virtual control modes, such as for the virtual robotic arms or for other virtual components in the virtual environment. Generally, for example, a control mode for a virtual robotic arm can correspond to a function block that allows the virtual robotic arm to perform or carry out a particular task. For example, as shown in FIGURE 4A, a control system 430 can include several virtual control modes 432, 434, 436, etc., governing the actuation of at least one joint of the virtual robotic arm. The virtual control modes 432, 434, 436, etc., can include at least one primitive mode (which governs the underlying behavior for actuating at least one joint) and/or at least one user mode (which governs higher-level, task-specific behavior and can use one or more primitive modes). In some variations, a user can activate a virtual contact point surface of a virtual robotic arm or of another virtual object, thereby triggering a particular control mode (for example, via a state machine or another controller). In some variations, a user can directly select a particular control mode through, for example, a menu displayed in the first-person perspective view of the virtual environment.
[0061] Examples of primitive virtual control modes include, but are not limited to, a joint command mode (which allows a user to directly actuate a single virtual joint individually and/or multiple virtual joints collectively), a gravity compensation mode (in which the virtual robotic arm holds itself in a particular position, with a particular position and orientation of its links and joints, without drifting downward due to simulated gravity), and a trajectory-following mode (in which the virtual robotic arm can move to follow a sequence of one or more Cartesian trajectory commands or other commands). Examples of user modes that incorporate one or more primitive control modes include, but are not limited to, an idle mode (in which the virtual robotic arm can remain in a current or default position awaiting further commands), a setup mode (in which the virtual robotic arm can transition to a default setup position or a predetermined template position for a particular type of surgical procedure), and a docking mode (in which the robotic arm facilitates the process by which the user attaches the robotic arm to a part, such as with gravity compensation, etc.).
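To make the primitive-mode/user-mode relationship concrete, a minimal Python sketch follows, including a simple state machine that switches modes when a virtual contact point is activated; the class names, the mapping table, and the placeholder command logic are assumptions for illustration only.

```python
from typing import Dict, List

class PrimitiveMode:
    """Primitive control mode governing low-level joint actuation."""
    def compute_joint_commands(self, arm_condition: dict) -> List[float]:
        raise NotImplementedError

class GravityCompensation(PrimitiveMode):
    def compute_joint_commands(self, arm_condition: dict) -> List[float]:
        # Hold the arm in place against simulated gravity (placeholder values).
        return arm_condition.get("gravity_torques", [])

class JointCommandMode(PrimitiveMode):
    def compute_joint_commands(self, arm_condition: dict) -> List[float]:
        return arm_condition.get("commanded_angles", [])

class IdleMode:
    """Task-level user mode that simply awaits further commands."""
    def __init__(self) -> None:
        self.primitives: List[PrimitiveMode] = []

class DockingMode:
    """Task-level user mode composed of one or more primitive modes."""
    def __init__(self) -> None:
        self.primitives: List[PrimitiveMode] = [GravityCompensation()]

class ControlModeStateMachine:
    """Switches the active mode when a virtual contact point is activated."""
    def __init__(self) -> None:
        self.active_mode = IdleMode()
        self.contact_point_modes: Dict[str, type] = {"docking_contact_point": DockingMode}

    def on_contact_point_activated(self, contact_point: str) -> None:
        mode_class = self.contact_point_modes.get(contact_point)
        if mode_class is not None:
            self.active_mode = mode_class()
```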
[0062] Generally, the virtual operating environment application 410 and the kinematics application 420 can communicate with each other via a predefined communication protocol, such as an application program interface (API) that organizes information (for example, condition or other characteristics) about virtual objects and other aspects of the virtual environment. For example, the API may include data structures that specify how to communicate information about virtual objects such as a virtual robotic arm (as a whole and/or on a segment-by-segment basis), a virtual table, a virtual table adapter connecting a virtual arm with the virtual table, a virtual cannula, a virtual tool, a virtual contact point for facilitating user interaction with the virtual environment, a user input system, handheld controller devices, etc. In addition, the API can include one or more data structures that specify how to communicate information about events in the virtual environment (for example, a collision event between two virtual entities) or about other aspects relating to the virtual environment (for example, a reference frame for displaying the virtual environment, a control system framework, etc.). Illustrative data structures and illustrative fields for containing their information are listed and described in FIGURE 4B and FIGURE 4C, although it should be understood that other variations of the API may include any suitable types, names, and numbers of data structures and data structure fields.
[0063] In some variations, as generally and schematically illustrated in FIGURE 4A, the virtual operating environment application 410 passes condition information to the kinematics application 420, and the kinematics application 420 passes commands to the virtual operating environment application 410 via the API, where the commands are generated based on the condition information and subsequently used by the virtual reality processor 210 to generate changes in the virtual robotic surgical environment. For example, a method for incorporating one or more kinematic algorithms into a virtual robotic surgical environment for controlling a virtual robotic arm may include passing condition information regarding at least part of the virtual robotic arm from the virtual operating environment application 410 to the kinematics application 420, determining by means of an algorithm an actuation command for actuating at least one virtual joint of the virtual robotic arm, and passing the actuation command from the kinematics application 420 to the virtual operating environment application 410. The virtual reality processor 210 can subsequently move the virtual robotic arm according to the actuation command.
[0064] As an illustrative example of controlling a virtual robotic arm, a gravity compensation control mode for a virtual robotic arm can be activated, thereby requiring one or more virtual joint actuation commands in order to counteract the simulated gravitational forces acting on the virtual joints of the virtual robotic arm. The virtual operating environment application 410 can pass to the kinematics application 420 relevant condition information regarding the virtual robotic arm (for example, the position of at least part of the virtual robotic arm, the position of the virtual patient table on which the virtual robotic arm is mounted, the position of a virtual contact point that the user may have manipulated to move the virtual robotic arm, the joint angles between adjacent links of the virtual arm) and relevant condition information regarding the virtual environment (for example, the direction of the simulated gravitational force acting on the virtual robotic arm). Based on the condition information received from the virtual operating environment application 410 and on the known kinematic and/or dynamic properties of the virtual robotic arm and/or the virtual tool controller attached to the virtual robotic arm (for example, known from a configuration file, etc.), the control system 430 can algorithmically determine what actuation force at each virtual joint is required to compensate for the simulated gravitational force acting on that virtual joint. For example, the control system 430 can use a forward kinematics algorithm, an inverse algorithm, or any other suitable algorithm. Once the actuation force command for each relevant virtual joint of the virtual robotic arm has been determined, the kinematics application 420 can send the force commands to the virtual operating environment application 410. The virtual reality processor can subsequently actuate the virtual joints of the virtual robotic arm according to the force commands, thereby causing the virtual robotic arm to be visualized as maintaining its current position despite the simulated gravitational force in the virtual environment (for example, instead of falling or collapsing under the simulated gravitational force).
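For illustration of the per-joint gravity-compensation computation described in [0064], the sketch below assumes a planar serial arm with each link's center of mass at its midpoint; these simplifications (and all names) are assumptions made only to keep the example short, and a full implementation could use the forward or inverse kinematics algorithms mentioned above.

```python
import math
from typing import List

SIMULATED_GRAVITY = 9.81  # m/s^2

def gravity_compensation_torques(joint_angles: List[float],
                                 link_lengths: List[float],
                                 link_masses: List[float]) -> List[float]:
    """Return one holding torque per virtual joint for a planar serial arm.

    joint_angles are relative to the previous link; each link's center of mass is
    assumed to lie at its midpoint.
    """
    # Forward pass: horizontal positions of each joint and of each link's center of mass.
    joint_x = [0.0]
    com_x = []
    absolute_angle = 0.0
    x = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        absolute_angle += angle
        com_x.append(x + 0.5 * length * math.cos(absolute_angle))
        x += length * math.cos(absolute_angle)
        joint_x.append(x)
    # The torque at joint i must support the weight of every link at or beyond it.
    torques = []
    for i in range(len(joint_angles)):
        torques.append(sum(mass * SIMULATED_GRAVITY * (com_x[j] - joint_x[i])
                           for j, mass in enumerate(link_masses) if j >= i))
    return torques

# Example: a two-link virtual arm held horizontally.
print(gravity_compensation_torques([0.0, 0.0], [0.4, 0.3], [2.0, 1.5]))
```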
[0065] Another example of controlling a virtual robotic arm is trajectory following for a robotic arm. In trajectory following, the movement of the robotic arm can be programmed and then emulated using the virtual reality system. Consequently, when the system is used to emulate a trajectory-following control mode, the actuation command generated by the kinematics application can include generating an actuated command for each of several virtual joints in the virtual robotic arm. This set of actuated commands can be implemented by the virtual operating environment application to move the virtual robotic arm in the virtual environment, thus allowing testing with respect to collisions, volume or workspace of movement, etc.
[0066] Other virtual control algorithms for the virtual robotic arm and/or for other virtual components (for example, virtual table adapter links coupling the virtual robotic arm with a virtual operating table) can be implemented via similar communication between the virtual operating environment application 410 and the kinematics application 420.
[0067] Although the virtual reality processor 210 is generally referred to in this document as a single processor, it should be understood that in some variations several processors can be used to perform the processing described in this document. The one or more processors may include, for example, a processor of a general-purpose computer, a special-purpose computer or controller, or another programmable data processing device or component, etc. Generally, the one or more processors can be configured to execute instructions stored on any suitable computer-readable media. The computer-readable media can include, for example, magnetic media, optical media, magneto-optical media, and hardware devices that are specially configured to store and execute program code, such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), ROM and RAM devices, flash memory, EEPROMs, optical devices (for example, CD or DVD), hard drives, floppy drives, or any other suitable device. Examples of computer program code include machine code, as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter. For example, a variation can be implemented using C++, JAVA, or another suitable object-oriented programming language and development tools. As another example, another variation can be implemented in hardwired circuitry instead of, or in combination with, machine-executable software instructions.
Head-mounted display and handheld controllers
[0068] As shown in FIGURE 2A, a user U can wear a head-mounted display 220 and/or hold one or more handheld controllers 230. The head-mounted display 220 and the handheld controllers 230 can generally allow a user to navigate and/or interact with the virtual robotic surgical environment generated by the virtual reality processor 210. The head-mounted display 220 and/or the handheld controllers 230 can communicate signals to the virtual reality processor 210 via a wired or wireless connection.
[0069] In some variations, the head-mounted display 220 and/or the handheld controllers 230 can be modified versions of those included in any suitable virtual reality hardware system that is commercially available for applications including virtual and augmented reality environments. For example, the head-mounted display 220 and/or the handheld controllers 230 can be modified to allow user interaction with a virtual robotic surgical environment (for example, a handheld controller 230 can be modified as described below to operate as a manual laparoscopic controller). In some variations, the virtual reality system can also include one or more tracking emitters 212 that emit infrared light into the workspace of user U. The tracking emitters 212 can, for example, be mounted on a wall, a ceiling, furniture, or another suitable mounting surface.
The sensors can be coupled with surfaces facing away from the video mounted on the head 220 and / or from the manual controllers 230 to detect the emitted infrared light. Based on the location of any sensors that detect the emitted light and when those sensors detect the emitted light after the light is emitted, the virtual reality processor 220 can be configured to determine (for example, through triangulation) the location and orientation of the head mounted video 220 and / or handheld controllers 230 within the workspace. In other variations, other suitable means (for example, other sensor technologies, such as accelerometers and gyroscopes, other sensor arrangements, etc.) can be used to determine the location and orientation of the head mounted video 220 and hand controllers 230. [0070] In some variations, the video mounted on the head 220 may include strips (for example, with buckles, elastic, fittings, etc.) that facilitate the assembly of the video 220 close to the user's head. For example, head mounted video 220 can be structured similar to safety glasses, a tiara or headphones, a hat, etc. Head mounted video 220 may include two headphone mounts providing immersive stereoscopic video, although alternatively it may include any suitable video. [0071] Manual controllers 230 can include interacted components that the user can manipulate to interact with the virtual robotic surgical environment. For example, handheld controllers 230 may include one or more buttons, actuators, sensitive components Petition 870190110594, of 10/30/2019, p. 41/139 34/86 touchscreen, scroll wheels, keys, and / or other suitable interactive components. In addition, handheld controllers 230 can have any one of several shape factors, such as stick, tweezers, generally round shapes (for example, ball or egg shapes), etc. In some variations, the graphic representations 230 'displayed in the head-mounted video 220 and / or in the external video 240 can generally mimic the form factor of the actual real hand controllers 230. In some variations, the hand controller may include a transported device ( eg stick, remote device, etc.) and / or a garment worn in the user's hand (eg, gloves, rings, bracelets, etc.) and including sensors and / or configured to cooperate with external sensors to thereby provide tracking the user's hand (hands), individual finger (s), pulse (s), etc. Other suitable controllers can still or alternatively be used (for example, gloves configured to provide tracking of the user's arm (s). Manual Laparoscopic Controller [0072] In some variations, as shown in the schematic of FIGURE 5A, hand controller 230 may further include at least one component of tool 232 that is representative of at least part of a hand laparoscopic tool, thereby forming a hand laparoscopic controller 234 that can be used to control a virtual manual laparoscopic tool. Generally, for example, tool component 232 can function to adapt hand controller 230 to a controller substantially similar in forming (e.g., user feels and touches) to a manual laparoscopic tool. The manual laparoscopic controller 234 can be communicatively coupled with the virtual reality processor 210 to manipulate a laparoscopic tool Petition 870190110594, of 10/30/2019, p. 42/139 35/86 virtual manual in the virtual robotic surgical environment, and can help to allow the user to feel as if he or she is using a real manual laparoscopic tool while interacting with the virtual robotic surgical environment. 
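Before continuing with the manual laparoscopic controller, the triangulation mentioned above for locating the head mounted video 220 and hand controllers 230 can be illustrated with a deliberately simplified two-dimensional sketch, in which two tracking emitters at known positions each report the bearing angle at which the device was detected. Actual systems rely on the timing of swept infrared light across many sensors per device; the names and values below are hypothetical.

# Simplified 2D triangulation sketch: two tracking emitters at known positions
# each report the bearing angle at which the tracked device (e.g., the head
# mounted video or a hand controller) was detected; intersecting the two rays
# gives the device position. This is only the geometric core of the idea.
import math

def triangulate_2d(emitter_a, angle_a, emitter_b, angle_b):
    """Intersect two bearing rays. Angles are measured from the +x axis, in radians."""
    ax, ay = emitter_a
    bx, by = emitter_b
    da = (math.cos(angle_a), math.sin(angle_a))
    db = (math.cos(angle_b), math.sin(angle_b))
    # Solve a + t*da = b + s*db for t using Cramer's rule on [da, -db].
    denom = da[0] * (-db[1]) - da[1] * (-db[0])
    if abs(denom) < 1e-9:
        raise ValueError("Rays are parallel; cannot triangulate")
    rx, ry = bx - ax, by - ay
    t = (rx * (-db[1]) - ry * (-db[0])) / denom
    return (ax + t * da[0], ay + t * da[1])

# Example: emitters mounted on opposite walls of the workspace.
position = triangulate_2d(emitter_a=(0.0, 0.0), angle_a=math.radians(45),
                          emitter_b=(4.0, 0.0), angle_b=math.radians(135))
print(position)  # expected near (2.0, 2.0)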
In some variations, the manual laparoscopic device may be a model (for example, faithful or generalized version) of a manual laparoscopic tool, while in other variations, the manual laparoscopic device may be a functional manual laparoscopic tool. The movements of at least a part of the manual laparoscopic controller can be mapped by the virtual reality controller to match the movements of the manual virtual laparoscopic tool. Thus, in some variations, the virtual reality system can simulate the use of a manual laparoscopic tool for manual MIS. [0073] As shown in FIGURE 5A, the manual laparoscopic controller 234 can be used with a simulated patient configured to further simulate the feeling of a virtual manual laparoscopic tool. For example, the manual laparoscopic controller 234 can be inserted into a cannula 250 (for example, a real cannula used in MIS procedures to provide a realistic feel of a hand tool inside a cannula, or a suitable representation of it, such as a tube with a lumen to receive a tool shaft part from the 234 manual laparoscopic controller). The cannula 250 can be placed on a simulated patient's abdomen 260, such as a foam body with one or more locations or insertion openings to receive the cannula 250. Alternatively, other suitable simulated patient configurations can be used, such as a cavity providing resistance (for example, with fluid, etc.) with a sensation similar to a real patient's abdomen. Petition 870190110594, of 10/30/2019, p. 43/139 36/86 [0074] Additionally, as shown in FIGURE 5B, the virtual reality processor can generate a virtual robotic surgical environment including a virtual manual laparoscopic tool 236 'and / or a virtual cannula 250' in relation to a virtual patient (e.g. 250 'graphical representation of the cannula represented as inserted in the virtual patient). Thus, the virtual environment with the virtual manual laparoscopic tool 236 'and the virtual cannula 250' can be displayed in the immersive video provided by the head mounted video 220, and / or in the external video 240. A calibration procedure can be performed to map the manual laparoscopic controller 234 for the virtual manual laparoscopic tool 236 'within the virtual environment. As a result, as the user moves and manipulates the manual laparoscopic controller 234, the combination of at least one tool component 234 and the simulated patient configuration can allow the user to feel tactfully as if he or she is using a manual laparoscopic tool in the virtual robotic surgical environment. Likewise, as the user moves and manipulates the manual laparoscopic controller 234, the corresponding movements of the virtual manual laparoscopic tool 236 'can allow the user to view the simulation that he or she is using a manual laparoscopic tool in the surgical environment virtual robotic. [0075] In some variations, the calibration procedure for the manual laparoscopic controller generally maps the manual laparoscopic controller 234 to the virtual manual laparoscopic tool 236 '. For example, in general, the calibration procedure can zero its position in relation to a reference point within the virtual environment. In an illustrative calibration procedure, the user can insert the manual laparoscopic controller through the cannula 250 into the abdomen of the simulated patient 260, which can Petition 870190110594, of 10/30/2019, p. 44/139 37/86 be placed on a table in front of the user (for example, at a height that is representative of the height of a patient table for actual operation). 
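Before continuing with the calibration steps below, a minimal sketch of the mapping and zeroing just described, in which the manual laparoscopic controller 234 is registered to the virtual manual laparoscopic tool 236' once the user confirms its placement, might look as follows. Positions are reduced to three coordinates and orientation to a single yaw angle for brevity; all names are hypothetical and not the system's actual interfaces.

# Hedged sketch of zeroing the manual laparoscopic controller against a
# reference point in the virtual environment. Names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float; yaw: float  # meters / radians

class LaparoscopicCalibration:
    def __init__(self):
        self.offset = None  # difference between physical and virtual frames

    def confirm_placement(self, physical: Pose, virtual_reference: Pose):
        """Called when the user presses the trigger to confirm that the physical
        controller is seated at the virtual cannula's entry point."""
        self.offset = Pose(virtual_reference.x - physical.x,
                           virtual_reference.y - physical.y,
                           virtual_reference.z - physical.z,
                           virtual_reference.yaw - physical.yaw)

    def to_virtual(self, physical: Pose) -> Pose:
        """Map a live physical controller pose to the virtual tool pose."""
        if self.offset is None:
            raise RuntimeError("Calibration has not been confirmed yet")
        return Pose(physical.x + self.offset.x, physical.y + self.offset.y,
                    physical.z + self.offset.z, physical.yaw + self.offset.yaw)

# Usage: confirm once at the cannula entry, then stream mapped poses every frame.
calibration = LaparoscopicCalibration()
calibration.confirm_placement(physical=Pose(0.10, -0.30, 0.95, 0.0),
                              virtual_reference=Pose(0.0, 0.0, 1.10, 0.0))
print(calibration.to_virtual(Pose(0.12, -0.28, 0.93, 0.15)))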
The user can continue to insert the manual laparoscopic controller into the abdomen of the simulated patient 260 to an appropriate depth representative of the depth achieved during an actual laparoscopic procedure. Once the manual laparoscopic controller is properly placed on the abdomen of the simulated patient 260, the user can provide an entry (for example, compress a trigger or push a button on the manual laparoscopic controller, by voice command, etc.) to confirm and guide the virtual patient to the location and height of the simulated patient's abdomen 260. In addition, other aspects of the virtual environment can be calibrated to align with real tangible aspects of the system, such as by representing the virtual components in an adjusted way to the target locations and allow the user to inform to confirm new alignment of the virtual component with the target locations (for example, by pressing a trigger or pressing a button on the manual laparoscopic controller, voice command, etc.). The orientation of virtual components (for example, rotational orientation of an axis) can be adjusted with a sensitive surface, touch, TrackBall, or other suitable input on the handheld laparoscopic controller or other device. For example, the virtual operating room may be aligned with the actual room in which the user is located, a distal end of the virtual cannula or trocar may be aligned with the actual entry location in the simulated patient's abdomen, etc. In addition, in some variations, a virtual actuator (eg, end cutting tool, cutter) may be located and guided via the manual laparoscopic controller to a new location and target orientation in similar ways. Petition 870190110594, of 10/30/2019, p. 45/139 38/86 [0076] In some variations, as shown in FIGURE 5B, the system can include both a handheld controller 230 and a laparoscopic handheld controller 234. Consequently, the virtual reality processor can generate a virtual environment including both a graphical representation 230 'of a manual controller 230 (without laparoscopic connection) as a virtual manual laparoscopic tool 236 'as described above. Hand controller 230 can be communicatively coupled with virtual reality processor 210 to manipulate at least one virtual robotic arm, and manual laparoscopic controller 234 can be communicatively coupled with virtual reality processor 210 to manipulate a virtual hand laparoscopic tool 236 ' . Thus, in some variations, the virtual reality system can simulate a bedside mode of using a robotic surgical system, in which an operator is on the patient's side and manipulating both a robotic arm (for example, with one hand) providing Robot-assisted MIS, as a manual laparoscopic tool providing manual MIS. [0077] The 232 tool component can include any suitable component generally approaching or representing a part of a manual laparoscopic tool. For example, tool component 232 can generally approach a laparoscopic tool axis (for example, include an elongated member extending from a hand held portion of the controller). As another example, the 232 tool component may include a trigger, button, or other laparoscopic interactive component similar to this present in a manual laparoscopic tool that employs an interactive component in the 230 hand controller, but provides a realistic form factor mimicking the feeling of a manual laparoscopic tool (for example, the Petition 870190110594, of 10/30/2019, p. 
46/139 39/86 component of tool 232 may include a large driver having a realistic form factor that is superimposed and engages with a generic interactive component in hand controller 230). As yet another example, the tool component 232 can include materials and / or masses selected to create a manual laparoscopic controller 234 having a weight distribution that is similar to a particular type of manual laparoscopic tool. In some variations, the 232 tool component may include plastic (for example, polycarbonate, acrylonitrile butadiene styrene (ABS), nylon, etc.) that is injection molded, machined, 3D printed, or other suitable material formatted in any way appropriate. In other variations, the tool component 232 may include metal or other suitable material that is machined, cast, etc. [0078] In some variations, the component of tool 236 may be an adapter or other connection that is formed separately from hand controller 230 and coupled with hand controller 230 via fasteners (eg screws, magnets, etc.), locking (for example, threads or pressure fitting components, such as flaps and slots, etc.), epoxy, welding (for example, ultrasonic welding), etc. Tool component 236 can be reversibly coupled with hand controller 230. For example, tool component 236 can be selectively connected with hand controller 230 in order to adapt a hand controller 230 when a laparoscopic style hand controller 230 is desired, while tool component 236 can be selectively separated from hand controller 230 when laparoscopic style hand controller 230 is not desired. Alternatively, the tool component 236 can be permanently coupled with the hand grip portion 234, such as during manufacture. In addition, in some variations, the Petition 870190110594, of 10/30/2019, p. 47/139 40/86 hand grip part 234 and tool component 236 can be formed in one piece (for example, injection molded together as a single piece). [0079] An illustrative variation of a manual laparoscopic controller is shown in FIGURE 6A. The laparoscopic hand controller 600 may include a hand holding part 610 (for example, similar to hand controller 230 described above), a tool shaft 630, and a shaft adapter 620 to couple the tool shaft with the hold with hand 610. As shown in FIGURE 6B, the manual laparoscopic controller 600 can generally be used to control a virtual laparoscopic manual stapler tool 600 ', although the manual laparoscopic controller 600 can be used to control other types of manual laparoscopic tools (eg, scissors, dissectors, grippers, needle holders, probes, forceps, biopsy tools, etc.). For example, the hand holding part 610 can be associated with a virtual handle 610 'of the virtual manual laparoscopic stapling tool 600' having a stapling actuator 640 ', so that the user manipulation of the hand holding part 610 is mapped for manipulation of the virtual handle 610 '. Similarly, the axis of tool 630 may correspond to an axis of virtual tool 630 'of the laparoscopic handheld stapler tool 600'. The tool axis 630 and the virtual tool axis 630 'can be inserted into a cannula and a virtual cannula, respectively, so that the movement of the tool axis 630 relative to the cannula is mapped to the movement of the tool axis virtual 630 'inside the virtual cannula in the virtual robotic surgical environment. [0080] The hand holding part 610 may include one or more interactive components, such as the trigger 612 and / or the button 614, Petition 870190110594, of 10/30/2019, p. 
48/139 41/86 which can receive user input from the user's fingers, palms, etc., and be communicatively coupled with a virtual reality processor. In this illustrative embodiment, the finger actuator 612 can be mapped to a virtual actuator 612 'on the virtual laparoscopic stapling tool 600'. The virtual actuator 612 'can be viewed as acting on the virtual actuator 640' (for example, causing the virtual members of the virtual actuator 640 'to close and fire clips) to staple virtual tissue in the virtual environment. Consequently, when the user activates the finger trigger 612 on the manual laparoscopic controller, the signal from the finger trigger 612 can be communicated to the virtual reality processor, which modifies the virtual laparoscopic stapler tool 600 'to interact within. of the virtual environment in simulation of a real manual laparoscopic stapling tool. In another variation, a connection from the driver can physically look (for example, in shape and form) with the virtual driver 612 'on the virtual laparoscopic stapler tool 600' and can be coupled with the driver by finger 612, which can allow the 600 manual laparoscopic controller more closely mimics the user's feeling of the 600 'virtual manual laparoscopic stapler tool. [0081] As shown in FIGURES 6C to 6E, the shaft adapter 620 can generally work to couple the tool shaft 630 with the hand holding part 610, which can, for example, adapt a hand controller (similar to manual controller 210 described above) on a manual laparoscopic controller. The shaft adapter 620 can generally include a first end for coupling with the hand holding portion 610 and a second end for coupling with the shaft of the tool 630. As best shown in FIGURE 6E, the first end of the Petition 870190110594, of 10/30/2019, p. 49/139 42/86 shaft adapter 620 may include a proximal part 620a and a distal part 620b configured to attach to a component of the hand holding part 610. For example, the hand holding part 610 may include the generally resembling a ring defining a central space 614 that receives the proximal part 620a and the distal part 620b. The proximal part 620a and the distal part 620b can be fixed on either side of the part looking like a ring in its internal diameter, and be fixed next to the ring-type part via fasteners (not shown) passing through the holes of fastener 622, thus holding the shaft adapter 620 near the hand holding part 610. Additionally, or alternatively, the shaft adapter 620 can be coupled with the hand holding part 610 in any suitable way, such as tight fit, epoxy, component locking (for example, between the proximal part 620a and the distal part 620b), etc. As also shown in FIGURE 6E, the second end of the shaft adapter 620 can include a recess to receive the tool shaft 620. For example, the recess can be generally cylindrical to receive a generally cylindrical end of a part of the tool shaft 630 , such as by pressure fitting, friction fitting, or other tight fitting. Additionally or alternatively, the tool spindle 620 can be coupled with the spindle adapter 620 with fasteners (eg screws, nut bolts, epoxy, ultrasonic solder, etc.). The tool axis 630 can be of any suitable size (for example, length, diameter) to imitate or represent a manual laparoscopic tool. [0082] In some variations, the 620 shaft adapter can be selectively removable from the 610 hand holding part to allow selective use of the 610 hand holding part as much as an independent hand controller (eg controller Petition 870190110594, of 10/30/2019, p. 
50/139 43/86 manual 210), as well as a manual laparoscopic controller 600. Additionally or alternatively, the tool shaft 630 can be selectively removable from the shaft adapter 620 (for example, the shaft adapter 620 can be intentionally attached to the hand held part 610, tool shaft 620 can be selectively removable from shaft adapter 620 to convert laparoscopic hand control 600 to an independent hand controller 210). [0083] Generally, the tool component of the manual laparoscopic controller 600, such as the shaft adapter 620 and the tool shaft 630, can be manufactured from a rigid or semi-rigid plastic or metal, and can be manufactured through any process. suitable manufacturing, such as 3D printing, injection molding, milling, turning, etc. The tool component can include various types of materials, and / or weights or other masses to further simulate the user's feeling for a particular manual laparoscopic tool. System variations [0084] One or more aspects of the virtual reality system described above can be incorporated into other variations of systems. For example, in some variations, a virtual reality system to provide a virtual robotic surgical environment can interface with one or more components of a real robotic surgical environment. For example, as shown in FIGURE 3, a system 700 can include one or more processors (for example, a virtual reality processor 210) configured to generate a virtual robotic surgical environment, and one or more sensors 750 in a robotic surgical environment, where the one or more sensors 750 are in communication with the one or more processors. Sensor information from the robotic surgical environment can be configured Petition 870190110594, of 10/30/2019, p. 51/139 44/86 to detect condition of a component of the robotic surgical environment, such as to imitate or replicate components of the robotic surgical environment in the virtual robotic surgical environment. For example, a user can monitor a real robotic surgical procedure in a real operating room via a virtual reality system that interfaces with the real operating room (for example, the user can interact with a virtual reality environment that is reflective conditions in the actual operating room). [0085] In some variations, one or more 750 sensors can be configured to detect the condition of at least one robotic component (for example, a component of a robotic surgical system, such as a robotic arm, a tool controller coupled with an arm robotic, a patient operating table to which a robotic arm is attached, a control tower, etc.) or another component of a robotic surgical operating room. Such a condition may indicate, for example, position, orientation, pace, speed, operating status (for example, on or off, power level, mode), or any other suitable condition of the component. [0086] For example, one or more accelerometers can be coupled with a robotic arm connection and be configured to provide information on the position, orientation and / or speed of movement of the robotic arm connection, etc. Various accelerometers on various robotic arms can be configured to provide information regarding obstructive and / or present collisions between robotic arms, between different connections of a robotic arm, or between a robotic arm and a nearby obstacle having a known position. [0087] As another example, one or more proximity sensors (for example, infrared sensor, capacitive sensor) can be Petition 870190110594, of 10/30/2019, p. 
52/139 45/86 coupled with a part of a robotic arm or with other components of the robotic surgical system or surgical environment. Such proximity sensors can, for example, be configured to provide information regarding obstructive collisions between objects. Additionally or alternatively, contact or touch sensors can be coupled with a part of a robotic arm or other components of the robotic surgical environment, and can be configured to provide information regarding a collision between objects. [0088] In another example, one or more components of the robotic surgical system or surgical environment may include markers (for example, infrared markers) to facilitate optical tracking of the position, orientation, and / or speed of various components, such as suspended sensors monitoring markers in the surgical environment. Similarly, the surgical environment may additionally or alternatively include cameras to scan and / or model the surgical environment and its contents. Such optical tracking sensors and / or cameras can be configured to provide information regarding obstructive and / or present collisions between objects. [0089] As another example, one or more 750 sensors can be configured to detect a condition of a patient, a surgeon, or another surgical team. Such a condition can indicate, for example, position, orientation, pace, speed, and / or biological metrics such as heart rate, blood pressure, temperature, etc. For example, a heart rate monitor, blood pressure monitor, thermometer, and / or oxygen sensor, etc., can be coupled with the patient and allow a user to keep track of the patient's condition. [0090] Generally, in these variations, a reality processor Petition 870190110594, of 10/30/2019, p. 53/139 46/86 of virtual 210 can generate a virtual robotic surgical environment similar to that described anywhere in this document. In addition, upon receipt of the condition information from one or more sensors 750, the virtual reality processor 210 or another processor in the system may incorporate the detected condition in any one or more suitable ways. For example, in one variation, the virtual reality processor 210 can be configured to generate a virtual reality replica or almost a replica of a robotic surgical environment and / or a robotic surgical procedure performed at this location. For example, the one or more 750 sensors in the robotic surgical environment can be configured to detect a condition of a robotic component corresponding to the virtual robotic component in the virtual robotic surgical environment (for example, the virtual robotic component can be substantially representative of the robotic component in visually and / or in function). In this variation, the virtual reality processor 210 can be configured to receive the detected condition from the robotic component, and then modify the virtual robotic component based at least in part on the detected condition so that the virtual robotic component mimics the robotic component. For example, if a surgeon moves a robotic arm during a robotic surgical procedure to a particular position, then a virtual robotic arm in the virtual environment can move accordingly. [0091] As another example, the virtual reality processor 210 can receive condition information indicating an alarm event, such as an obstructive or present collision between objects, or a poor health condition of the patient. 
Upon receiving such information, the virtual reality processor 210 can provide a warning or alarm to the user about the occurrence of the event, such as by displaying a visual alert (for example, text, icon indicating collision, a Petition 870190110594, of 10/30/2019, p. 54/139 47/86 seen inside the virtual environment representing the collision, etc.,), audio alert, etc. [0092] As yet another example, the one or more sensors in the robotic surgical environment can be used to compare an actual surgical procedure (occurring in the non-virtual robotic surgical environment) with a surgical procedure planned as planned in a virtual robotic surgical environment. For example, an expected position of at least one robotic component (for example, robotic arm) can be determined during surgical pre-planning, as viewed as a corresponding virtual robotic component in a virtual robotic surgical environment. During an actual surgical procedure, one or more sensors can provide information on a measured position of the actual robotic component. Any differences between the expected position and measurement of the robotic component may indicate deviations from a surgical plan that was built in the virtual reality environment. Since such deviations can eventually result in unintended consequences (eg, unintended collision between robotic arms, etc.), deviation identification can allow the user to adjust the surgical plan (eg reconfigure the approach to a surgical site, change instruments surgical, etc.). User modes [0093] Generally, the virtual reality system can include one or more user modes allowing a user to interact with the virtual robotic surgical environment by moving and / or manipulating manual controllers 230. Such interactions can include, for example, moving objects (eg, virtual robotic arm, virtual tool, etc.) in the virtual environment, add camera viewpoints to view the virtual environment simultaneously from various observation points, navigate within the virtual environment without requiring Petition 870190110594, of 10/30/2019, p. 55/139 48/86 that the user moves the video mounted on the head 220 (for example, per floor), etc., as further described below. [0094] In some variations, the virtual reality system may include several user modes, in which each user mode is associated with a respective subset of user interactions. As shown in FIGURE 8, at least some of the user modes can be presented in a video (for example, head mounted video 220) for user selection. For example, at least some of the user modes may correspond to the selectable user mode icons 812 displayed in a user mode menu 810. The user mode menu 810 can be overlaid on the virtual robotic surgical environment video so that a 230 'graphical representation of the hand controller (or the user's hand, or other suitable representative icon, etc.) can be maneuvered by the user to select a user mode icon, thereby activating the user mode corresponding to the user icon selected user mode. As shown in FIGURE 8, user mode icons 812 can generally be arranged in a palette or circle, but can alternatively be arranged in a grid or other suitable arrangement. In some variations, a selected subset of possible user modes can be displayed in menu 810 based, for example, on user preferences (for example, associated with a set of user connection information), user preferences similar to the current user , type of surgical procedure, etc. 
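Before continuing with the user mode menu, the sensor-driven mirroring and plan-deviation comparison described above might be sketched, for illustration only, as follows; the message formats, class names, and tolerance value are hypothetical assumptions rather than the system's actual interfaces.

# Hypothetical sketch: mirror detected robot state onto the virtual twin and
# flag deviations from the pre-planned (virtual) joint positions.
import math
from typing import Dict, Sequence

class VirtualRoboticArm:
    """Stand-in for the virtual robotic component updated by the VR processor."""
    def __init__(self, name: str):
        self.name = name
        self.joint_angles: Sequence[float] = ()

    def apply_detected_condition(self, joint_angles: Sequence[float]):
        self.joint_angles = tuple(joint_angles)  # virtual arm mimics the real arm

def check_against_plan(measured: Dict[str, Sequence[float]],
                       planned: Dict[str, Sequence[float]],
                       tolerance_rad: float = 0.1):
    """Compare measured joint angles with the surgical plan built in VR.
    Returns (arm, joint index, deviation) tuples exceeding the tolerance."""
    deviations = []
    for arm, measured_angles in measured.items():
        for i, (meas, plan) in enumerate(zip(measured_angles, planned.get(arm, measured_angles))):
            if abs(meas - plan) > tolerance_rad:
                deviations.append((arm, i, meas - plan))
    return deviations

# One update cycle driven by sensor messages from the real operating room.
virtual_arm = VirtualRoboticArm("arm_1")
sensor_message = {"arm_1": [0.32, -0.48, 0.25]}   # from sensors 750 (hypothetical format)
surgical_plan = {"arm_1": [0.30, -0.50, 0.10]}    # from virtual pre-planning
virtual_arm.apply_detected_condition(sensor_message["arm_1"])
for arm, joint, delta in check_against_plan(sensor_message, surgical_plan):
    print(f"ALERT: {arm} joint {joint} deviates from plan by {math.degrees(delta):.1f} deg")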
[0095] FIGURE 14 illustrates a method of operation 1400 of an illustrative variation of a user mode menu providing selection of one or more user mode icons. To activate the user menu, the user can engage a user input method associated with the menu. For example, the input method can be engaged by the user through a hand controller (for example, a handheld user interface device), such as by pressing a button or other suitable component on the hand controller (1410). As another example, the input method can be engaged by the user through a pedal or another component of a user console (1410'). Voice commands and/or other devices can additionally or alternatively be used to engage an input method associated with the menu. While the input method is engaged (1412), the virtual reality system can generate and display an arrangement of user mode icons (for example, arranged in a palette around a central origin, as shown in FIGURE 8A). The arrangement of user mode icons can generally be displayed near or around a graphical representation of the hand controller and/or a rendered cursor that is controlled by the hand controller. [0096] For example, in a variation in which a hand controller includes a circular menu button and the graphical representation of the hand controller likewise has a circular menu button displayed in the virtual reality environment, the arrangement of user mode icons can be centered around and aligned with the menu button so that the normal vectors of the menu plane and of the menu button are substantially aligned. The circular or radial menu can include, for example, several different menu regions (1414) or sectors, each of which can be associated with a range of angles (for example, an arcuate segment of the circular menu) and a user mode icon (for example, as shown in FIGURE 8). Each region can be switched between a selected state and an unselected state. [0097] Method 1400 can generally include determining a user selection of a user mode and receiving confirmation that the user would like to activate the selected user mode for the virtual reality system. To select a user mode in the user mode menu, the user can move the hand controller (1420) to freely manipulate the graphical representation of the hand controller and navigate among the user mode icons in the user mode menu. Generally, the position and orientation of the hand controller (and the position and orientation of the graphical representation of the hand controller, which moves according to the hand controller) can be analyzed to determine whether the user has selected a particular user mode icon. For example, in variations in which the user mode icons are arranged in a generally circular palette around a central origin, the method may include determining the radial distance and/or the angular orientation of the graphical representation of the hand controller relative to the central origin. For example, a test to determine user selection of a user mode icon can include one or more decision boxes, which can be satisfied in any suitable order. In a first decision box (1422), the distance from the graphical representation of the hand controller to the center of the user mode menu (or to another reference point in the user mode menu) is compared with a limit distance.
The distance can be expressed in terms of absolute distance (for example, number of pixels) or proportions (for example, percentage of distance between a center point and the user mode icons arranged around the periphery of the user mode menu, such as 80% or more). If the distance is less than the limit, then it can be determined that no user mode icon is selected. Additionally or alternatively, the selection of a user mode icon may depend on a second decision box (1424). In the second decision box (1424), the orientation of the graphical representation of the hand controller is measured and Petition 870190110594, of 10/30/2019, p. 58/139 51/86 correlated with a user mode icon associated with an arcuate menu segment. If the orientation corresponds to an arcuate segment selected from the menu, then it can be determined that a particular user mode (associated with the selected arcuate segment) is selected by the user. For example, a user mode icon can be determined as selected by the user if both the distance and the angular orientation of the graphical representation of the hand controller in relation to the origin satisfy the conditions (1422) and (1424). [0098] After determining that a user has selected a particular user mode icon, the method may, in some variations, carry that selection to the user (for example, as confirmation) by visual and / or auditory indications. For example, in some variations, the method may include synthesizing one or more visual cues (1430) in the virtual reality environment displayed in response to determining that a user has selected a user mode icon. As shown in FIGURE 14, illustrative visual cues (1432) include modifying the appearance of the selected user mode icon (and / or the arcuate segment associated with the selected user mode icon) with highlighting (for example, thickened outer lines) , animation (for example, wavy lines, dancing or pulsating icon), change in size (for example, increase in icon), change in apparent depth, change in color or opacity (for example, more or less translucent, change in pattern fill icon), change in position (for example, moving radially outward or inward from the central origin, etc.), and / or any suitable visual modification. In some variations, indicating to the user in these ways in other appropriate ways can inform the user that the user mode will be activated, before the user confirms the selection of a particular user mode. Per Petition 870190110594, of 10/30/2019, p. 59/139 For example, the method may include producing one or more visual cues (1430) as the user navigates or scrolls through the various user mode icons in the menu. [0099] The user can confirm the approval of the selected user mode icon in one or more several ways. For example, the user can release or disable the user input method (1440) associated with the menu (for example, by releasing a button on the hand controller, removing one from the pedal), in order to indicate the approval of the user mode selected. In other variations, the user can confirm the selection by hovering over the selected user mode icon for at least a predetermined period of time (for example, at least 5 seconds), double-clicking on the user input method associated with the user menu (for example, double-clicking the button, etc.), speaking a verbal command indicating approval, etc. [00100] In some variations, upon receiving confirmation that the user approves the selected user mode, the method may include checking which user mode icon has been selected. 
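Such a check, together with the browsing-time test of decision boxes (1422) and (1424) described above, might be sketched as follows; the two-dimensional geometry, mode names, and threshold fraction are hypothetical simplifications rather than the actual implementation.

# Hypothetical sketch of the radial menu selection test described above
# (decision boxes 1422/1424 while browsing, and 1442/1446 on release).
import math
from typing import List, Optional

def selected_mode(cursor_xy, menu_center_xy, menu_radius: float,
                  mode_names: List[str], distance_fraction: float = 0.8) -> Optional[str]:
    """Return the user mode whose arcuate segment the cursor points at,
    or None if the cursor is closer than the distance limit to the center."""
    dx = cursor_xy[0] - menu_center_xy[0]
    dy = cursor_xy[1] - menu_center_xy[1]
    # Decision box 1422 / 1442: radial distance versus limit.
    if math.hypot(dx, dy) < distance_fraction * menu_radius:
        return None
    # Decision box 1424 / 1446: map the angle to an arcuate segment.
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = int(angle / (2 * math.pi / len(mode_names))) % len(mode_names)
    return mode_names[sector]

def on_menu_button_released(cursor_xy, menu_center_xy, menu_radius,
                            mode_names, current_mode):
    """Confirm-on-release: activate the highlighted mode, or keep the previous one."""
    choice = selected_mode(cursor_xy, menu_center_xy, menu_radius, mode_names)
    return choice if choice is not None else current_mode   # 1450 versus 1444/1448

modes = ["object grip", "portals", "flight", "camera", "snap points", "measure"]
active = on_menu_button_released(cursor_xy=(0.28, 0.05), menu_center_xy=(0.0, 0.0),
                                 menu_radius=0.3, mode_names=modes,
                                 current_mode="object grip")
print(active)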
For example, as shown in FIGURE 14, a test to check which user mode icon was selected can include one or more decision boxes, which can be satisfied in any suitable order. For example, in variations in which the user mode icons are arranged in a generally circular palette around a central origin, the method may include determining the radial distance of the graphical representation of the hand controller relative to the central origin (1442) and/or its angular orientation relative to the central origin (1446) when the user indicates approval of the user mode icon selection. In some variations, the decision boxes (1442) and (1446) can be similar to the decision boxes (1422) and (1424) described above, respectively. If at least one of these decision boxes (1442) and (1446) is not satisfied, then the release of the user input method can be correlated with non-selection of a user mode icon (for example, the user may have changed his mind about selecting a new user mode). Accordingly, if the graphical representation of the hand controller fails to satisfy the distance limit (1442), then the original or previous user mode can be retained (1444). Similarly, if the graphical representation of the hand controller fails to correspond to an arcuate menu segment (1446), then the original or previous user mode can be retained (1448). If the graphical representation of the hand controller both meets the distance limit (1442) and corresponds to an arcuate segment of the menu, then the selected user mode can be activated (1450). In other variations, a user mode can additionally or alternatively be selected with other interactions, such as voice command, eye tracking via sensors, etc. In addition, the system may additionally or alternatively suggest activation of one or more user modes based on criteria such as user activity (for example, if the user is frequently turning his head to see details near the edge of his field of view, the system can suggest a user mode allowing placement of a camera to provide a window view on a transparent panel from a desired observation point, as described above), type of surgical procedure, etc. Object Grip [00101] An illustrative user mode with the virtual robotic surgical environment allows a user to grasp, move, or otherwise manipulate virtual objects in the virtual environment. Examples of manipulable virtual objects include, but are not limited to, virtual representations of physical items (for example, one or more virtual robotic arms, one or more virtual tool controllers, virtual manual laparoscopic tools, a virtual patient operating table or other support surface, a virtual control tower or other equipment, a virtual user console, etc.) and other virtual or graphical constructs such as portals, window views, patient image presentations or other projections on a transparent panel, etc., which are further described below. [00102] At least some of the virtual objects can include, or be associated with, at least one selectable virtual contact point or component. When a virtual contact point is selected by a user, the user can move (for example, adjust the position and/or orientation of) the virtual object associated with the selected virtual contact point.
In addition, several virtual touch points can be selected simultaneously (for example, several manual controllers 230 and their graphic representations 230 ’) on the same virtual object or on several separate virtual objects. [00103] The user can usually select a virtual contact point by moving a hand controller 230 to correspondingly move a graphical representation 230 'to the virtual contact point in the virtual environment, then employ an interactive component such as a driver or button on hand controller 230 to indicate selection of the virtual contact point. In some variations, a virtual contact point may remain selected as long as the user employs the interactive component in the hand controller 230 (for example, as long as the user presses a trigger) and may become unselected when the user releases the interactive component. For example, the virtual contact point can allow the user to click and drag the virtual object via the virtual contact point. In some variations, a virtual contact point can be switched between a selected state and an unselected state, Petition 870190110594, of 10/30/2019, p. 62/139 55/86 by the fact that a contact point can remain selected after a single use of the interactive component in the virtual controller until a second use of the interactive component switches the virtual contact point to an unselected state. In the virtual robotic surgical environment, one or both types of virtual contact points may be present. [00104] A virtual object can include at least one virtual contact point for direct manipulation of the virtual object. For example, a virtual robotic arm in the virtual environment may include a virtual contact point in one of its virtual arm connections. The user can move a hand controller 230 until the graphical representation 230 'of the hand controller is close (for example, hovering) to the virtual contact point, employ a trigger or other interactive component in the hand controller 230 to select the virtual contact point , and then move hand controller 230 to manipulate the virtual robotic arm via the virtual contact point. As a result, the user can manipulate the hand controller 230 to reposition the virtual robotic arm in a new position, such as to create a more spacious workspace in the virtual environment by the patient, test the range of motion of the virtual robotic arm to determine the probability of collisions between the virtual robotic arm and other objects, etc. [00105] A virtual object can include at least one virtual contact point that is associated with a second virtual object, for indirect manipulation of the second virtual object. For example, a virtual control panel may include a virtual point of contact on a virtual key or button that is associated with the patient's operating table. The virtual key or button can, for example, control the height or angle of the virtual patient operating table in the virtual environment, similar to how a key or button on a control panel Petition 870190110594, of 10/30/2019, p. 63/139 56/86 real trolley can electronically or mechanically modify the height or angle of a real patient operating table. The user can move a hand controller 230 until the graphical representation 230 'of the hand controller is close (for example, hovering) to the virtual contact point, employ a trigger or other interactive component in the hand control 230 to select the virtual contact point , and then move hand controller 230 to manipulate the virtual key or button via the virtual contact point. 
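A minimal sketch of this hover, select, and click-and-drag behavior might look as follows, with positions reduced to three coordinates and a fixed hover radius; the class names and values are hypothetical and not the system's actual interfaces.

# Hypothetical sketch of virtual contact point selection and click-and-drag
# manipulation driven by a hand controller's tracked position and trigger.
import math
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualObject:
    name: str
    position: Vec3
    contact_points: List[Vec3] = field(default_factory=list)  # offsets from object position

def _dist(a: Vec3, b: Vec3) -> float:
    return math.dist(a, b)

class GrabInteraction:
    HOVER_RADIUS = 0.05  # meters; hypothetical hover/selection threshold

    def __init__(self, objects: List[VirtualObject]):
        self.objects = objects
        self.grabbed: Optional[VirtualObject] = None
        self.last_controller_pos: Optional[Vec3] = None

    def update(self, controller_pos: Vec3, trigger_pressed: bool):
        if trigger_pressed and self.grabbed is None:
            # Select the nearest contact point within the hover radius.
            for obj in self.objects:
                for offset in obj.contact_points:
                    world = tuple(p + o for p, o in zip(obj.position, offset))
                    if _dist(controller_pos, world) < self.HOVER_RADIUS:
                        self.grabbed, self.last_controller_pos = obj, controller_pos
                        return
        elif trigger_pressed and self.grabbed is not None:
            # Drag: apply the controller's positional delta to the object.
            delta = tuple(c - l for c, l in zip(controller_pos, self.last_controller_pos))
            self.grabbed.position = tuple(p + d for p, d in zip(self.grabbed.position, delta))
            self.last_controller_pos = controller_pos
        else:
            self.grabbed = None  # trigger released: deselect

arm = VirtualObject("virtual robotic arm", position=(1.0, 0.0, 1.2),
                    contact_points=[(0.0, 0.0, 0.3)])
interaction = GrabInteraction([arm])
interaction.update((1.02, 0.01, 1.49), trigger_pressed=True)   # grab near the contact point
interaction.update((1.10, 0.05, 1.40), trigger_pressed=True)   # drag
print(arm.position)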
Accordingly, by manipulating the hand controller 230 in this way, the user can modify the height or angle of the virtual patient operating table in the virtual environment, such as to improve the approach angle or access to a workspace in the virtual environment. [00106] When the virtual contact point is selected, the virtual reality processor can modify the virtual robotic surgical environment to indicate to the user that the virtual contact point is indeed selected. For example, the virtual object including the virtual contact point can be highlighted by being graphically rendered in a different color (for example, blue or red) and/or outlined in a different line weight or color, in order to visually contrast the affected virtual object with other virtual objects in the virtual environment. Additionally or alternatively, the virtual reality processor can provide audio feedback (for example, a tone, an audible alarm, or verbal acknowledgment) through an audio device indicating selection of the virtual contact point, and/or tactile feedback (for example, vibration) through a hand controller 230, the head mounted video 220, or another suitable device. Navigation [00107] Other illustrative user modes with the virtual robotic surgical environment can allow a user to navigate and explore the virtual space within the virtual environment. Instant points [00108] In some variations, the system may include a user mode allowing instantaneous points, or virtual targets within the virtual environment, that can be used to assist the user's navigation within the virtual environment. An instantaneous point can, for example, be placed at a user-selected or pre-established location within the virtual environment and allow a user to quickly navigate to that location upon selecting the instantaneous point. An instantaneous point may, in some variations, be associated with an orientation within the virtual environment and/or with an apparent scale (zoom level) of the environment display from that observation point. Instantaneous points can, for example, be visually indicated as colored points or other colored markers graphically displayed in the first-person perspective view. By selecting an instantaneous point, the user can be transported to the observation point of the selected instantaneous point within the virtual robotic surgical environment. [00109] For example, FIGURE 16 illustrates a method of operation 1600 of an illustrative variation of a user mode allowing instantaneous points. As shown in FIGURE 16, an instantaneous point can be positioned (1610) in the virtual environment by a user or as a predetermined configuration. For example, a user can navigate through a user mode menu as described above, select or grab an instantaneous point icon from the menu with a hand controller (for example, indicated with a colored dot or another suitable marker), and drag and drop the instantaneous point icon at a desired location and/or orientation in the virtual environment. The instantaneous point can, in some variations, be repositioned by the user selecting the instantaneous point again (for example, moving the graphical representation of the hand controller until it intersects the instantaneous point or a collision volume boundary around the instantaneous point, and then engaging an input component such as a button or trigger) and dragging and dropping the instantaneous point icon at a new desired location.
In some variations, the user can establish the scale or zoom level of the observation point (1620) associated with the instantaneous point, such as by adjusting a slide bar or scroll wheel displayed, by movements as described above to establish a level of scale for environmental manipulation, etc. The snapshot point may, in some instances, have a preset scale level associated with all or a subcategory of Snapshot points, a scale level associated with the user's current observation point when the user places the snapshot point, or adjusted as described above. In addition, once an instantaneous point is placed, the instantaneous point can be stored (1630) in memory (for example, local or remote storage) for future access. An instantaneous point can, in some variations, be deleted from the virtual environment and from memory. For example, an instantaneous point can be selected (in a similar way to repositioning the instantaneous point) and designed for erasure by dragging it off the screen to a predetermined location (for example, virtual trash can) and / or by moving it at a predetermined speed (for example, throwing in a direction out of the user's observation point at a speed greater than a predetermined limit), by selecting a secondary menu option, by voice command, etc. . [00110] One or more instant points for a virtual environment are stored in memory, and the user can select one of the Petition 870190110594, of 10/30/2019, p. 66/139 59/86 instant points stored (1640) for use. For example, when selecting a stored instant point, the user's observation point can be adjusted to the position, orientation, and / or scale of the selected instant point settings (1650), thereby allowing the user to feel as if he teleporting the location associated with the selected instant point. In some variations, the user's previous observation point can be stored as an instantaneous point (1660) to facilitate undoing the user's perceived teleportation and moving the user back to their previous observation point. Such an instantaneous point can be temporary (for example, disappear after a predetermined period of time, such as after 5 to 10 seconds). In some instances, the user's previous observation point can be stored as an instantaneous point only if the user's previous location was not a pre-existing instantaneous point. In addition, in some variations, a trail or virtual trajectory (for example, line or arc) can be displayed in the virtual environment by connecting the user's previous observation point with the user's new observation point associated with the selected instant point, the which can, for example, provide the user with context as to how he teleported within the virtual environment. Such visual indication can be removed from the display of the virtual environment after a predetermined period of time (for example, after 5 to 10 seconds). [00111] Generally, in some variations, an instantaneous point can operate in a similar way to the portals described below, except that an instantaneous point can indicate an observation point without providing an early window view of the virtual environment. For example, Instant points can be placed at observation points selected by the user outside and / or inside the patient Petition 870190110594, of 10/30/2019, p. 67/139 60/86 virtual, and can be linked in one or more trajectories, similar to the portals described above. 
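A minimal sketch of how instantaneous points might be stored, selected, and undone, loosely following steps (1610) to (1660) described above, is shown below; the simplified single-angle orientation, the expiry timer, and all names are hypothetical assumptions rather than the actual system.

# Hypothetical sketch of instantaneous ("snap") point storage and teleportation,
# loosely following steps 1610-1660 described above.
import time
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SnapPoint:
    position: Tuple[float, float, float]
    orientation_yaw: float = 0.0       # simplified single-angle orientation
    scale: float = 1.0                 # apparent zoom level at this vantage point
    temporary_until: Optional[float] = None  # for the auto-expiring "undo" point

@dataclass
class Viewpoint:
    position: Tuple[float, float, float]
    orientation_yaw: float
    scale: float

class SnapPointNavigator:
    def __init__(self, viewpoint: Viewpoint):
        self.viewpoint = viewpoint
        self.snap_points: List[SnapPoint] = []

    def place(self, point: SnapPoint):                       # 1610 / 1620 / 1630
        self.snap_points.append(point)

    def select(self, point: SnapPoint, undo_lifetime_s: float = 10.0):
        """Teleport to the selected point (1650) and keep the previous viewpoint
        as a temporary snap point so the jump can be undone (1660)."""
        previous = SnapPoint(self.viewpoint.position, self.viewpoint.orientation_yaw,
                             self.viewpoint.scale,
                             temporary_until=time.time() + undo_lifetime_s)
        self.snap_points.append(previous)
        self.viewpoint = Viewpoint(point.position, point.orientation_yaw, point.scale)
        return previous  # caller may use this for an "undo teleport" action

    def expire_temporaries(self):
        now = time.time()
        self.snap_points = [p for p in self.snap_points
                            if p.temporary_until is None or p.temporary_until > now]

nav = SnapPointNavigator(Viewpoint((0.0, 0.0, 1.7), 0.0, 1.0))
bedside = SnapPoint(position=(1.2, 0.4, 1.5), orientation_yaw=1.57, scale=2.0)
nav.place(bedside)
undo_point = nav.select(bedside)
print(nav.viewpoint, "undo available at", undo_point.position)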
In some variations, instantaneous point trajectories can be established by the user in a manner similar to that described below for portals. Portals [00112] In some variations, the system may include a user mode that facilitates the placement of one or more portals, or teleportation points, at user-selected locations in the virtual environment. Each portal can, for example, serve as a transport portal to a corresponding observation point in the virtual environment, thus allowing the user to quickly change observation points for viewing and navigating the virtual environment. Generally, upon selecting a portal (for example, with one or more hand controllers 230), the user's apparent location can change to the location of the selected portal, so that the user views the virtual environment from the vantage point of the selected portal and has the sensation of being transported around the virtual environment. By placing one or more portals around the virtual environment, the user may have the ability to quickly move between multiple observation points. The placement, adjustment, storage and/or navigation of portals around the virtual environment can be similar to that of the instantaneous points described above. [00113] For example, as generally described above, the system can display a first-person perspective view of the virtual robotic surgical environment from a first observation point within the virtual robotic surgical environment. The user can navigate through a menu to select a user mode that allows the placement of a portal. As shown in FIGURE 9A, the user can manipulate the graphical representation 230' of the hand controller to position a portal 910 at a selected location in the virtual environment. For example, the user can engage a component (for example, a trigger or button) on the hand controller while a user mode allowing portal placement is enabled, so that while the component is engaged and the user moves the position and/or orientation of the hand controller, a portal 910 can appear and be moved within the virtual environment. One or more portal placement indicators 920 (for example, one or more arrows, a line, an arc, etc. connecting the graphical representation 230' with a prospective portal location) can assist in communicating to the user the prospective location of the portal 910, as well as in aiding depth perception. The size of the portal 910 can be adjusted by grabbing and stretching or shrinking the sides of the portal 910 via the hand controllers. When the location of the portal 910 is confirmed (for example, by the user releasing the engaged component on the hand controller, double-clicking, etc.), the user's apparent location within the virtual environment can be updated to match the observation point associated with the portal 910. In some variations, as described below, at least some observation points within the virtual environment may be prohibited. These prohibited observation points can be stored in memory (for example, local or remote storage). In these variations, if a portal 910 location is confirmed at a prohibited location (for example, compared against and found in the list of prohibited observation points stored in memory), then the user's apparent location within the virtual environment can be maintained without change.
However, if a portal 910 location is confirmed as allowed (for example, compared against and not found in the list of prohibited observation points), then the user's apparent location within the virtual environment can be updated as described above. [00114] In some variations, once the user has placed the portal 910 at a desired observation point, a window view of the virtual environment from the observation point of the placed portal 910 can be displayed within the portal 910, thereby offering a preview of the view offered by the portal 910. For example, the user can see through the portal 910 with full parallax, so that the portal 910 behaves like a kind of magnifying lens. For example, while looking through the portal 910, the user can see the virtual environment as if the user had been scaled by the inverse of the portal's scale factor (which affects both the interpupillary distance and the focal length) and as if the user had moved to the reciprocal of the portal's scale factor (1/portal scale factor) of the distance from the portal 910 to the user's current location. Additionally, the portal 910 can include an event horizon, which can be a texture on a plane that is rendered, for example, using one or more additional cameras (described below) positioned within the scene of the virtual environment as described above. In these variations, when traveling through the portal 910 after selecting the portal 910 for teleportation, the user's view of the virtual environment can naturally converge with the user's apparent observation point as the user approaches the portal, given that the user's observation point is shifted by a fraction of the distance to the portal (1/portal scale factor). Consequently, the user can feel as if he or she is smoothly and naturally entering the view of the virtual environment at the scale factor associated with the selected portal. [00115] As shown in FIGURE 9A, in some variations, the portal 910 can be generally circular. However, in other variations, one or more portals 910 can be of any suitable shape, such as elliptical, square, rectangular, irregular, etc. In addition, the window view of the virtual environment that is displayed in the portal can display the virtual environment at a scale factor associated with the portal, so that views of the virtual environment displayed in different portals can be displayed at different zoom levels (for example, 1x, 1.5x, 2x, 2.5x, 3x, etc.), thereby also changing the user's scale relative to the environment. The scale of the window view in a portal can also indicate or correspond to the scale of the view that would be displayed if the user were transported to that portal's observation point. For example, if a view of the virtual environment outside a virtual patient is about 1x, then a window view of the environment inside the virtual patient can be about 2x or more, thereby providing the user with more detail of the virtual patient's internal tissue. The scale factor can be defined by the user or predetermined by the system (for example, based on the location of the portal in the virtual environment). In some variations, the scale factor may correlate with the displayed size of the portal 910, while in other variations the scale factor may be independent of the portal size. [00116] In some variations, a portal 910 can be placed at substantially any observation point in the virtual environment that the user desires.
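Before continuing with portal placement, one possible reading of the scaling and parallax behavior described in the preceding paragraphs is sketched below: the preview camera is placed at 1/(scale factor) of the user-to-portal distance from the portal, and the rendered eye separation is divided by the same factor. This is only an interpretation of the description, not the actual rendering method, and all names and values are hypothetical.

# One possible reading of the portal preview math described above, written as a
# small helper. This is an interpretation of the text, not the actual renderer.
from typing import Tuple

Vec3 = Tuple[float, float, float]

def portal_preview_camera(user_pos: Vec3, portal_pos: Vec3, scale_factor: float,
                          eye_separation: float = 0.064) -> Tuple[Vec3, float]:
    """Place the preview camera as if the user stood at 1/scale_factor of the
    current user-to-portal distance from the portal, with the eye separation
    (and hence apparent user scale) also divided by the scale factor."""
    offset = tuple((u - p) / scale_factor for u, p in zip(user_pos, portal_pos))
    camera_pos = tuple(p + o for p, o in zip(portal_pos, offset))
    return camera_pos, eye_separation / scale_factor

# A 2x portal placed 2 m in front of the user: the preview camera sits about
# 1 m from the portal, and the stereo baseline is halved, so the previewed
# environment appears enlarged relative to the user.
camera, baseline = portal_preview_camera(user_pos=(0.0, 0.0, 1.7),
                                         portal_pos=(0.0, 2.0, 1.5),
                                         scale_factor=2.0)
print(camera, baseline)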
For example, a 910 portal can be placed anywhere on a virtual land surface of the virtual operating room or on a virtual object (for example, table, chair, user console, etc.). As another example, as shown in FIGURE 9B, a portal 910 can be placed above the ground at any suitable elevation above the virtual land surface. As yet another example, as shown in FIGURE 9C, a portal can be placed on or inside a virtual patient, such as portals 910a and 910b which are placed on a patient's abdomen and allow views of the intestines and other internal organs of the patient. patient Petition 870190110594, of 10/30/2019, p. 71/139 64/86 virtual (for example, simulated augmented reality). In this example, the virtual patient can be generated from capturing medical images and other information for a real (non-virtual) patient, so that portals 910a and 910b can allow the user to have an immersive view of an accurate representation of the real patient's tissue (for example, to view tumors, etc.), and / or generated from internal virtual cameras (described below) placed inside the patient. In some variations, the system may limit the placement of a 910 portal according to predefined guidelines (for example, only outside the patient or only inside the patient), which may correspond, for example, to a type of simulated surgical procedure or to a training level (for example, beginner or advanced user level) associated with the virtual environment. Such prohibited locations may be indicated to the user, for example, by a visual change in the 910 portal as it is being placed (for example, change of outline color, display of a gray or opaque window view within the 910 portal. as he is feeling placed) and / or auditory indications (eg, audible alarm, audible signals, verbal feedback). In yet other variations, the system may additionally or alternatively include one or more 910 portals placed in predetermined locations, such as in a virtual user console in the virtual environment, adjacent to the virtual patient table, etc. Such predetermined locations can, for example, depend on the type of procedure, or be saved as part of a configuration file. [00117] A 910 portal can be visible from any side (for example, front and rear side) of the portal. In some variations, the view from one side of portal 910 may be different from the opposite side of portal 910. For example, when viewed from the first side (for example, front) of portal 910, the portal may Petition 870190110594, of 10/30/2019, p. 72/139 65/86 provide a view of the virtual environment with a scale factor and parallel effects as described above, while when viewed from a second side (eg rear) of portal 910, the portal can provide a view of the virtual environment with a scale factor around one. As another example, the portal can provide a view of the virtual environment with a fact of scale and parallel effects when viewed from both the first and second sides of the portal. [00118] In some variations, several 910 portals can be sequentially linked to include a trajectory in the virtual environment. For example, as shown in FIGURE 9C, a first-person perspective view of the virtual robotic surgical environment from a first observation point can be displayed (for example, an immersive view). 
[00118] In some variations, several portals 910 can be sequentially linked to form a trajectory in the virtual environment. For example, as shown in FIGURE 9C, a first-person perspective view of the virtual robotic surgical environment from a first observation point can be displayed (for example, an immersive view). The user can place a first portal 910a at a second observation point that is different from the first observation point (for example, closer to the virtual patient than the first observation point), and a first window view of the virtual robotic surgical environment from the second observation point can be displayed in the first portal 910a. Similarly, the user can place a second portal 910b at a third observation point (for example, closer to the patient than the first and second observation points), and a second window view of the virtual robotic surgical environment can be displayed in the second portal 910b. The user can provide a user input associating the first and second portals 910a and 910b (for example, by selecting them with the hand controllers, drawing a line between the first and second portals with the hand controllers, etc.) so that the first and second portals are sequentially linked, thereby generating a trajectory between the first and second portals.

[00119] In some variations, after several portals 910 are linked to generate a trajectory, travel along the trajectory may not require explicit selection of each sequential portal. For example, once on the trajectory (for example, at the second observation point), travel between linked portals can be accomplished by engaging a trigger, button, touch-sensitive surface, scroll wheel, or other interactive feature of the hand controller, a voice command, etc.

[00120] Additional portals can be linked in a similar way. For example, two, three, or more portals can be linked in series to generate an extended trajectory. As another example, several portals can form branched trajectories, where at least two trajectories share at least one portal in common, but otherwise each trajectory has at least one portal that is unique to that trajectory. As yet another example, several portals can form two or more trajectories that share no portals in common. The user can select which trajectory to travel, such as by using the hand controllers and/or voice commands, etc. One or more trajectories between portals can be visually indicated (for example, with a dotted line, color coding of portals along the same trajectory, etc.), and such visual trajectory indication can be toggled on and off, such as based on user preference.
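A linked trajectory of the kind described above could be represented as in the following sketch; the class and its methods are hypothetical illustrations rather than the system's actual interface.

```python
# Minimal sketch (illustrative only): sequentially linking portals into a
# trajectory and stepping along it with a single controller input.
class PortalTrajectory:
    def __init__(self):
        self.portals = []   # ordered list of (observation_point, scale_factor)
        self.index = 0

    def link(self, observation_point, scale_factor=1.0):
        """Append a portal after the user associates it with the previous one."""
        self.portals.append((observation_point, scale_factor))

    def step_forward(self):
        if self.index + 1 < len(self.portals):
            self.index += 1
        return self.portals[self.index]

    def step_back(self):  # e.g., an "undo" command returning to the previous point
        if self.index > 0:
            self.index -= 1
        return self.portals[self.index]

path = PortalTrajectory()
path.link((0.0, 1.7, 2.0))        # first observation point (immersive view)
path.link((0.0, 1.5, 0.8), 2.0)   # portal 910a, closer to the virtual patient
path.link((0.0, 1.2, 0.3), 3.0)   # portal 910b, inside the patient's abdomen
print(path.step_forward())        # travel to 910a without re-selecting it
print(path.step_forward())        # then on to 910b
```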
[00121] Other portal features can facilitate easy navigation of trajectories between portals. For example, a portal can change color when the user has entered and moved through that portal. As shown in FIGURE 9C, in another example, a portal itself can be displayed with direction arrows indicating the permitted direction of travel along the trajectory including that portal. In addition, travel along trajectories can be performed with an undo command (via the hand controllers and/or a voice command, etc.) that returns the user to the previous observation point (for example, displays the view of the virtual environment from the previous observation point). In some variations, a home or default observation point can be established (such as according to user preference or system settings) in order to allow a user to return to that home observation point quickly with a shortcut command, such as an interactive feature on a hand controller or a voice command (for example, "Reset my position"). For example, a home or default observation point can be at a virtual user console or adjacent to the virtual patient table.

[00122] The user mode facilitating the placement and use of portals, or another separate user mode, can also facilitate the deletion of one or more portals. For example, a portal can be selected for deletion with the hand controllers. As another example, one or more portals can be selected for deletion via voice command (for example, "delete all portals" or "delete portal A").

Free navigation

[00123] The system can include a user mode that facilitates free navigation around the virtual robotic surgical environment. For example, as described in this document, the system can be configured to detect the user's walking movements based on sensors in the head-mounted display and/or in the hand controllers, and can correlate the user's movements with repositioning inside the virtual operating room.

[00124] In another variation, the system may include a flight mode that allows the user to quickly navigate the virtual environment in a "flying" manner, at different elevations and/or speeds and at different angles. For example, the user can navigate in flight mode by pointing one or more hand controllers and/or the headset in a desired direction of flight. Interactive features on the hand controller can also control the flight. For example, a directional pad or touch-sensitive surface can provide control for moving forward, backward, strafing, etc., while maintaining substantially the same perspective view of the virtual environment. Translations can, in some variations, occur without acceleration, since acceleration may tend to increase the likelihood of simulator sickness. In another user setting, a directional pad or touch-sensitive surface (or the orientation of the headset) can provide control for elevating the user's apparent location within the virtual environment. In addition, in some variations, similar to that described above with respect to portals, a home or default observation point within flight mode can be established in order to allow a user to return to that home observation point quickly with a shortcut command. Parameters such as flight speed in response to user input can be adjustable by the user and/or set by the system as a default.

[00125] In addition, in flight mode, the scaling of the displayed view can be controlled via the hand controllers. The scaling factor can, for example, affect the apparent elevation of the user's location within the virtual environment. In some variations, the user can use the hand controllers to pull two points in the displayed view apart to zoom in and bring two points in the displayed view closer together to zoom out, or conversely pull two points apart to zoom out and bring two points closer together to zoom in. Additionally or alternatively, the user can use voice commands (for example, "zoom in to 2x") to change the scaling factor of the displayed view. For example, FIGURES 10A and 10B illustrate views of the virtual environment that are relatively zoomed in and zoomed out, respectively. Parameters such as the rate of change of the scale factor, the range of minimum and maximum scale factors, etc., can be adjusted by the user and/or set by the system as a default.
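The flight-mode translation and two-point zoom gesture described above could work roughly as follows. This is a hedged sketch of assumed per-frame logic; the speed, scale limits, and names are placeholders.

```python
# Minimal sketch (assumed frame-update logic): flight-mode navigation along the
# controller's pointing direction at constant speed (no acceleration), and a
# two-point pinch gesture adjusting the display scale within configured limits.
import numpy as np

def fly_step(view_pos, controller_dir, speed, dt):
    d = np.asarray(controller_dir, dtype=float)
    d /= np.linalg.norm(d)              # unit direction of flight
    return view_pos + speed * dt * d    # constant velocity translation

def pinch_scale(scale, p0_old, p1_old, p0_new, p1_new, lo=0.25, hi=4.0):
    old = np.linalg.norm(np.subtract(p1_old, p0_old))
    new = np.linalg.norm(np.subtract(p1_new, p0_new))
    return float(np.clip(scale * new / max(old, 1e-6), lo, hi))

pos = np.array([0.0, 2.0, 3.0])
pos = fly_step(pos, controller_dir=(0.0, -0.2, -1.0), speed=1.5, dt=0.016)
print(pos)                                                          # slightly forward and down
print(pinch_scale(1.0, (0, 0, 0), (0.1, 0, 0), (0, 0, 0), (0.2, 0, 0)))  # 2.0
```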
[00126] As the user freely navigates the virtual environment in flight mode, the displayed view may include features to reduce eye fatigue, nausea, etc. For example, in some variations, the system may include a comfort mode in which outer regions of the displayed view are removed as the user navigates in flight mode, which can, for example, help reduce motion sickness for the user. As shown in FIGURE 10C, when in comfort mode, the system can define a transition region 1030 between an internal transition boundary 1010 and an external transition boundary 1020 around a focal area (for example, the center) of the user's view. Inside the internal transition boundary 1010, a normal view of the virtual robotic surgical environment is displayed. Outside the external transition boundary 1020, a neutral view or plain background (for example, a plain gray background) is displayed. Within the transition region 1030, the displayed view may have a gradient that gradually changes from the view of the virtual environment to the neutral view. Although the transition region 1030 shown in FIGURE 10C is depicted as generally circular, with generally circular internal and external transition boundaries 1010 and 1020, in other variations the internal and external transition boundaries 1010 and 1020 may define a transition region 1030 that is elliptical or of another suitable shape. In addition, in some variations, various parameters of the transition region, such as size, shape, gradient, etc., can be adjustable by the user and/or set by the system as a default.

[00127] In some variations, as shown in FIGURE 11, the user can view the virtual robotic surgical environment from a dollhouse view that allows the user to view the virtual operating room from an overhead observation point, with a top-down perspective. In the dollhouse view, the virtual operating room can be displayed on the display at a smaller scale factor (for example, smaller than life size), thereby changing the user's scale relative to the virtual operating room. The dollhouse view can provide the user with additional contextual awareness of the virtual environment, as the user can see the entire virtual operating room at once, as well as the arrangement of its contents, such as virtual equipment, virtual personnel, the virtual patient, etc. Through the dollhouse view, for example, the user can rearrange virtual objects in the virtual operating room with fuller contextual awareness. The dollhouse view can, in some variations, be linked in a trajectory with the portals and/or snap points described elsewhere in this document.
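The comfort-mode gradient described for the transition region could be realized with a per-pixel blend such as the one sketched below. The boundary radii and the smoothstep blend are assumptions, not the exact math of the disclosure.

```python
# Minimal sketch (illustrative shader-style logic in Python): blending the
# rendered scene with a neutral background based on distance from the view center.
def comfort_blend(scene_rgb, neutral_rgb, radius, inner=0.35, outer=0.55):
    """radius: normalized distance of a pixel from the focal center of the view.
    Inside `inner` the scene is shown; outside `outer` the neutral background is
    shown; in between, the two are mixed with a smooth gradient."""
    if radius <= inner:
        t = 0.0
    elif radius >= outer:
        t = 1.0
    else:
        x = (radius - inner) / (outer - inner)
        t = x * x * (3.0 - 2.0 * x)  # smoothstep gradient across the transition region
    return tuple(s * (1.0 - t) + n * t for s, n in zip(scene_rgb, neutral_rgb))

gray = (0.5, 0.5, 0.5)
print(comfort_blend((0.9, 0.2, 0.2), gray, radius=0.2))   # normal scene color
print(comfort_blend((0.9, 0.2, 0.2), gray, radius=0.45))  # partway through the gradient
print(comfort_blend((0.9, 0.2, 0.2), gray, radius=0.7))   # neutral gray background
```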
Rotation of the environment view

[00128] In some variations, the system may include a user mode that allows the user to navigate the virtual robotic surgical environment by moving the virtual environment around his or her current observation point. The environment view rotation mode can offer a different way for the user to navigate the virtual environment, such as by grabbing and manipulating the environment as if it were an object. As the user navigates the virtual environment in this way, a comfort mode similar to the one described above can additionally be implemented to help reduce simulation-related nausea. For example, in an environment view rotation mode, the user can rotate a displayed scene around a current observation point by selecting and dragging the view of the virtual environment around the user's current observation point. In other words, in the environment view rotation mode, the user's apparent location in the virtual environment appears fixed while the virtual environment can be moved. This is in contrast to other modes, such as the flight mode described above, in which the environment generally appears fixed while the user moves. Similar to the scaling factor adjustments described above for flight mode, in the environment view rotation mode the scaling factor of the displayed view can be controlled via the hand controllers and/or voice commands (for example, by using the hand controllers to select and pull apart two points in the displayed view to zoom in, etc.).

[00129] For example, as shown in FIGURE 15, in an illustrative variation of a method 1500 for operating in an environment view rotation mode, the user can activate a user input method (1510), such as on a hand controller (for example, a button or trigger or other suitable feature) or any other suitable device. In some variations, one hand controller can be detected (1520) upon activation of the user input method. The original position of the hand controller at the time of activation can be detected and stored (1522). Thereafter, as the user moves the hand controller (for example, while continuing to activate the user input method), the current position of the hand controller can be detected (1524). A vector difference between the original (or previous) position and the current position of the hand controller can be calculated (1526), and the position of the user's observation point can be adjusted (1528) based at least partially on the calculated vector difference, thereby creating an effect that makes the user feel as if he or she is grabbing and dragging the virtual environment around.

[00130] In some variations, two hand controllers can be detected (1520') upon activation of the user input method. The original positions of the hand controllers can be detected (1522'), and a center point and an original vector between the original positions of the hand controllers can be calculated and stored (1523'). Thereafter, as the user moves one or both hand controllers (for example, while continuing to activate the user input method), the current positions of the hand controllers can be detected (1524') and used to form the basis of a vector difference calculated between the original and current vectors between the hand controllers (1526'). The position and/or orientation of the user's observation point can be adjusted (1528') based on the calculated vector difference. For example, the orientation or rotation of the displayed view can be rotated around the center point between the hand controller locations, thereby creating an effect that makes the user feel as if he or she is grabbing and dragging the surrounding environment. Similarly, the scale of the displayed view of the virtual environment can be adjusted (1529') based on the calculated difference in distance between the two hand controllers, thereby creating an effect that makes the user feel as if he or she is grabbing and zooming in and out of the displayed view of the virtual environment.
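The grab-and-drag behavior of method 1500 could be approximated by the vector math sketched below; the function names and the yaw-only rotation are illustrative assumptions.

```python
# Minimal sketch (assumed vector math) of method 1500: one controller offsets
# the viewpoint by the controller's displacement; two controllers additionally
# rotate the view about their midpoint and rescale it by the change in distance.
import numpy as np

def one_controller_drag(view_pos, original, current):
    # Dragging the world toward the user moves the viewpoint the opposite way.
    return view_pos - (np.asarray(current) - np.asarray(original))

def two_controller_drag(view_yaw, view_scale, orig_l, orig_r, cur_l, cur_r):
    orig_v = np.asarray(orig_r) - np.asarray(orig_l)
    cur_v = np.asarray(cur_r) - np.asarray(cur_l)
    # Rotation about the vertical axis from the change in the controller-to-controller vector.
    d_yaw = np.arctan2(cur_v[0], cur_v[2]) - np.arctan2(orig_v[0], orig_v[2])
    # Scale from the change in distance between the controllers.
    d_scale = np.linalg.norm(cur_v) / max(np.linalg.norm(orig_v), 1e-6)
    center = (np.asarray(cur_l) + np.asarray(cur_r)) / 2.0  # pivot of the rotation
    return view_yaw + d_yaw, view_scale * d_scale, center

print(one_controller_drag(np.zeros(3), (0, 1, 0), (0.2, 1, -0.1)))
print(two_controller_drag(0.0, 1.0, (-0.2, 1, 0), (0.2, 1, 0), (-0.1, 1, 0.1), (0.1, 1, -0.1)))
```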
[00131] Although the user modes described above are described separately, it should be understood that aspects of these modes characterize illustrative ways in which a user can navigate the virtual robotic surgical environment, and they can be combined into a single user mode. In addition, some of these aspects can be linked to one another. For example, an overhead observation point generally associated with flight mode can be sequentially linked with one or more portals on a trajectory. In addition, in some variations, an observation point or displayed view of the virtual environment (for example, as adjusted via one or more of the user modes above) can be linked with at least one preset observation point (for example, preset in position, orientation, and/or scale). For example, by activating a user input (for example, on a hand controller, foot pedal, etc.), a user can reset the current observation point to a designated or predetermined observation point in the virtual environment. The user's current observation point can, for example, be gradually or smoothly animated so as to change to the preset values of position, orientation, and/or scale.

Supplementary views

[00132] In some variations, an illustrative mode or modes of the system may display one or more supplementary views of additional information for a user, such as superimposed on or inset in the primary first-person perspective view of the virtual robotic surgical environment. For example, as shown in FIGURE 12, a heads-up display 1210 (HUD) can provide a transparent overlay over a primary first-person perspective view of the virtual environment. The HUD 1210 can be toggled on and off, thereby allowing the user to control whether to display the HUD 1210 at any particular time. Supplementary views of additional information, such as those described below, can be placed on the HUD so that the user can observe the supplementary views without looking away from the primary view. For example, supplementary views can adhere to the HUD 1210 and move with the user's head movement so that the supplementary views are always in the user's field of view. As another example, supplementary views of additional information can be loosely attached to the HUD 1210, in that supplementary views may be displayed small or at least partially hidden off-screen or in peripheral vision when the user's head is generally facing forward, but a minimal or slight head movement to one side can expand and/or bring one or more supplementary views into the user's field of view. One or more supplementary views can be arranged on the HUD 1210 in a row, grid, or other suitable arrangement. In some variations, the HUD 1210 may include predetermined snap points at which supplementary views (for example, camera views, etc.) are positioned. For example, the user can select a supplementary view on the HUD 1210 for closer inspection, and then return the supplementary view to the HUD 1210 by dragging it generally toward a snap point, whereupon the supplementary view is pulled to and fixed at the snap point without having to be placed precisely there by the user.
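Snapping a released supplementary view to the nearest HUD snap point, as described above, could be done along the lines of this illustrative sketch; the snap radius and coordinate convention are assumptions.

```python
# Minimal sketch (illustrative): pulling a dragged supplementary view to the
# nearest predetermined HUD snap point when it is released close enough.
import math

def snap_to_hud(release_pos, snap_points, snap_radius=0.15):
    """release_pos and snap_points are 2D positions in HUD coordinates.
    Returns the snap point the view is pulled to, or the release position
    itself if no snap point is within range."""
    best, best_d = None, snap_radius
    for p in snap_points:
        d = math.dist(release_pos, p)
        if d <= best_d:
            best, best_d = p, d
    return best if best is not None else release_pos

hud_slots = [(-0.6, 0.4), (0.0, 0.4), (0.6, 0.4)]   # e.g., a row of camera-view slots
print(snap_to_hud((0.55, 0.38), hud_slots))          # pulled to (0.6, 0.4)
print(snap_to_hud((0.0, -0.3), hud_slots))           # left where it was dropped
```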
[00133] As another example, in a virtual command station mode, one or more supplementary views can be displayed in a virtual space with one or more windows or content panels arranged in front of the user in the virtual space (for example, similar to a navigable menu). For example, as shown in FIGURE 13, several content windows (for example, 1310a, 1310b, 1310c, and 1310d) can be positioned in a semicircular arrangement or in another arrangement suitable for display to a user. The arrangement of the content windows can be adjusted by the user (for example, using the hand controllers, via their graphical representations 230', to select and drag or rotate content windows). Content windows can display, for example, an endoscope video feed, a portal view, an overhead "stadium" view of the virtual operating room, patient data (for example, imaging), other camera views or patient information views such as those described in this document, etc. By viewing several panels simultaneously, the user may be able to simultaneously monitor various aspects of the virtual operating room and/or the patient, thereby allowing the user to have a comprehensive and broader awareness of the virtual environment. For example, the user can become aware of, and then respond more quickly to, any adverse events in the virtual environment (for example, simulated negative reactions of the virtual patient during a simulated surgical procedure).

[00134] In addition, the virtual command station mode can allow a user to select any of the content windows and become immersed in the displayed content (for example, with a first-person perspective). Such a fully immersive mode may temporarily dismiss the other content windows, or may minimize them (for example, relegating them to a HUD overlaid on the selected immersive content). As an illustrative example, in the virtual command station mode, the system can display multiple content windows including an endoscopic camera video feed showing the inside of a virtual patient's abdomen. The user can select the endoscopic camera video feed to become fully immersed in the virtual patient's abdomen (for example, while still manipulating the robotic arms and the instruments attached to the arms).

Camera views

[00135] In some variations, a user mode may allow the user to place a virtual camera at a selected observation point in the virtual environment, and a window view of the virtual environment from the selected observation point can be displayed on the HUD so that the user can simultaneously see both his or her first-person perspective field of view and the camera view (the view provided by the virtual camera), which can update in real time. A virtual camera can be placed at any suitable location in the virtual environment (for example, inside or outside the patient, above the patient, above the virtual operating room, etc.). For example, as shown in FIGURE 12, the user can place a virtual camera 1220 (for example, using an object grip as described above) near the pelvic region of a virtual patient and facing the patient's abdomen so as to provide a virtual video feed of the patient's abdomen. Once placed, a virtual camera 1220 can subsequently be repositioned. A camera view (for example, a circular inset view, or a window of any suitable shape) can be placed on the HUD as a window view showing the virtual video feed from the observation point of the virtual camera 1220. Similarly, multiple virtual cameras can be placed in the virtual environment to allow multiple virtual camera views to be displayed on the HUD. In some variations, a predetermined arrangement of one or more virtual cameras can be loaded, such as part of a configuration file, for the virtual reality processor to incorporate into the virtual environment.
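As a purely illustrative aside, positioning a virtual camera such as camera 1220 and producing its view typically reduces to building a standard look-at view matrix for the chosen observation point; the sketch below shows that standard construction (not the disclosed renderer), with all positions assumed for the example.

```python
# Minimal sketch (standard graphics math, assumed values): the view matrix for a
# virtual camera placed near the virtual patient's pelvis and aimed at the
# abdomen, whose rendered image would then be drawn into a HUD window view.
import numpy as np

def look_at(camera_pos, target, up=(0.0, 1.0, 0.0)):
    camera_pos, target, up = map(np.asarray, (camera_pos, target, up))
    f = target - camera_pos
    f = f / np.linalg.norm(f)                        # forward direction
    r = np.cross(f, up); r = r / np.linalg.norm(r)   # right direction
    u = np.cross(r, f)                               # recomputed up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ camera_pos
    return view

# Hypothetical placement of virtual camera 1220: near the pelvic region, facing the abdomen.
view_matrix = look_at(camera_pos=(0.0, 1.4, 0.5), target=(0.0, 1.1, 0.0))
print(view_matrix.round(3))
# Each frame, the scene would be rendered with this matrix into a texture and
# composited as a circular (or otherwise shaped) window view on the HUD.
```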
[00136] In some variations, the system can offer a range of different types of virtual cameras, which can provide different types of camera views. One illustrative variation of a virtual camera is a cinematic camera that is configured to provide a live virtual feed of the virtual environment (for example, cinematic camera view 1212 in FIGURE 12). Another illustrative variation of a virtual camera is an endoscopic camera that is attached to a virtual endoscope to be placed in a virtual patient. In this variation, the user can, for example, virtually perform a technique to introduce the virtual endoscopic camera into the virtual patient and subsequently monitor the internal workspace inside the patient by viewing the virtual endoscopic video feed (for example, endoscopic camera view 1214 in FIGURE 12). In another illustrative variation, the virtual camera can be a wide-angle camera (for example, 360-degree, panoramic, etc.) that is configured to provide a larger field of view of the virtual environment. In this variation, the window camera view can, for example, be displayed with a fisheye or generally spherical projection.

[00137] Various aspects of the camera view can be adjusted by the user. For example, the user can adjust the location, size, scale factor, etc. of the camera view (for example, similar to the portal adjustments described above). As another example, the user can select one or more filters or other special image effects to be applied to the camera view. Illustrative filters include filters that highlight particular anatomical features (for example, tumors) or tissue characteristics (for example, perfusion) of the virtual patient. In some variations, one or more virtual cameras can be deselected or turned off (for example, the virtual camera and/or its associated camera view can be selectively hidden) or deleted, such as when the virtual camera or its associated camera view is obstructing the user's view of the virtual environment behind the virtual camera or camera view.

[00138] In some variations, a camera view can function similarly to a portal (described above) to allow the user to quickly navigate around the virtual environment. For example, with reference to FIGURE 12, a user can select a camera view 1212 (for example, by highlighting it or grabbing and pulling the camera view 1212 toward himself or herself) in order to be transported to the observation point of the camera view 1212.

Views of patient data, etc.

[00139] In some variations, a user mode may allow the display of patient data and other information on the HUD or at another suitable location on the display. For example, patient imaging information (for example, ultrasound, X-ray, MRI, etc.) can be displayed in a supplementary view, superimposed on the patient (for example, as simulated augmented reality). A user can, for example, view the patient images as a reference while interacting with the virtual patient. As another example, the patient's vital signs (for example, heart rate, blood pressure, etc.) can be displayed to the user in a supplementary view.

[00140] In another variation, a user mode may allow the display of other suitable information, such as training videos (for example, illustrative surgical procedures recorded from a previous procedure), a video feed from a mentor surgeon or instructor, etc., providing guidance to a user.

Virtual Reality System Applications

[00141] Generally, the virtual reality system can be used in any suitable scenario in which it is useful to simulate or replicate a robotic surgical environment.
In some variations, the virtual reality system can be used for training purposes, such as allowing a surgeon to practice controlling a robotic surgical system and/or practice performing a particular type of minimally invasive surgical procedure using a robotic surgical system. The virtual reality system can allow a user to better understand the movements of the robotic surgical system in response to the user's commands, both inside and outside the patient. For example, a user can wear a head-mounted display under the supervision of a mentor or instructor who can view the virtual reality environment alongside the user (for example, via a second head-mounted display, via an external display, etc.) and guide the user through operation of a virtual robotic surgical system within the virtual reality environment. As another example, a user can wear a head-mounted display and can view, on the immersive display (for example, in a content window, on the HUD, etc.), a training-related video such as a recording of a previously performed surgical procedure.

[00142] As another example, the virtual reality system can be used for surgical planning purposes. For example, a user can operate the virtual reality system to plan the surgical workflow. Configuration files for virtual objects (for example, a robotic surgical system including arm and tool controllers, user console, end effectors, other equipment, patient bed, patient, staff, etc.) can be loaded into a virtual robotic surgical environment as representative of the real objects that will be in the actual (that is, non-virtual, or real) operating room. Within the virtual reality environment, the user can adjust features of the virtual operating room, such as positioning the user console, the patient bed, and other equipment relative to one another in a desired arrangement. The user can additionally or alternatively use the virtual reality system to plan aspects of the robotic surgical system, such as selecting the number and location of ports for entry of surgical instruments, or determining the ideal number and positions/orientations (for example, mounting location, arm pose, etc.) of robotic arms for a procedure, such as to minimize potential collisions between system components during the surgical procedure. Such virtual arrangements can be based, for example, on previous setups, on trial and error, on similar surgical procedures and/or similar patients, etc. In some variations, the system may additionally or alternatively propose selected virtual arrangements based on machine learning techniques applied to data sets of previously performed surgical procedures for various types of patients.

[00143] As yet another example, the virtual reality system can be used for R&D purposes (for example, simulation). For example, a method for designing a robotic surgical system may include generating a virtual model of a robotic surgical system, testing the virtual model of the robotic surgical system in a virtual operating room environment, modifying the virtual model of the robotic surgical system based on the testing, and building the robotic surgical system based on the modified virtual model.
Aspects of the virtual model of the robotic surgical system that can be tested in the virtual operating room environment include physical characteristics of one or more components of the robotic surgical system (for example, diameter or length of arm links). For example, a virtual model of a particular design can be tested with respect to particular arm movements, surgical procedures, etc. (for example, testing for the likelihood of collision between the robotic arm and other objects). Consequently, a design of a robotic arm (or similarly, of any other component of the robotic surgical system) can be at least initially tested by testing a virtual implementation of the design, rather than testing a physical prototype, thereby accelerating the R&D cycle and reducing costs.

[00144] Other aspects that can be tested include functionality of one or more components of the robotic surgical system (for example, control modes of a control system). For example, as described above, a virtual operating environment application can pass status information to a kinematics application, and the kinematics application can generate and pass commands based on control algorithms, which the virtual reality processor can use to cause changes in the virtual robotic surgical environment (for example, moving a virtual robotic arm in a particular way in accordance with relevant control algorithms). Thus, software control algorithms can be embodied in a virtual robotic system for testing, refinement, etc., without requiring a physical prototype of the relevant robotic component, thereby conserving R&D resources and accelerating the R&D cycle.

[00145] In another example, the virtual reality system can be used to allow multiple surgeons to collaborate in the same virtual reality environment. For example, multiple users can wear head-mounted displays and interact with one another (and with the same virtual robotic system, the same virtual patient, etc.) in the virtual reality environment. The users can be physically in the same room or general location, or they can be remote from one another. For example, one user may be remotely instructing the others as they collaborate to perform a surgical procedure on the virtual patient.

[00146] Specific illustrative applications of the virtual reality system are described below in additional detail. However, it should be understood that applications of the virtual reality system are not limited to these examples and the general application scenarios described in this document.

Example 1 - Over the bed

[00147] A user can use the virtual reality system to simulate an over-the-bed scenario in which he or she is adjacent to a patient bed or table and operates both a robotic surgical system and a manual laparoscopic tool. Such a simulation can be useful for training, surgical planning, etc. For example, the user can staple tissue in a target segment of a virtual patient's intestine using a combination of a virtual robotically controlled tool and a virtual manual laparoscopic tool.

[00148] In this example, the user wears a head-mounted display providing an immersive view of a virtual reality environment, and can use the hand controllers to navigate within the virtual reality environment so as to be adjacent to a virtual patient table on which a virtual patient lies.
A proximal end of a virtual robotic arm is attached to the virtual patient table, and a distal end of the virtual robotic arm supports a virtual tool driver actuating virtual forceps that are positioned inside the virtual patient's abdomen. A virtual manual laparoscopic stapler tool is passed through a virtual cannula and has a distal end positioned inside the virtual patient's abdomen. Additionally, a virtual endoscopic camera is positioned inside the virtual patient's abdomen and provides a virtual camera feed showing the surgical workspace inside the virtual patient's abdomen (including patient tissue, the robotically controlled virtual forceps, and the virtual manual laparoscopic stapler tool).

[00149] The user continues to view the virtual environment via the immersive display in the head-mounted display, as well as the virtual endoscopic camera feed displayed in a window view on a heads-up display superimposed over the user's field of view. The user holds in one hand a hand controller that is configured to control the robotically driven virtual forceps. The user holds in the other hand a laparoscopic hand controller that is configured to control the virtual manual laparoscopic stapler tool, with the laparoscopic hand controller passed through a cannula mounted in a mock patient body made of foam. The laparoscopic hand controller is calibrated to correspond to the virtual manual laparoscopic stapler tool. The user manipulates the hand controller to operate the robotically controlled forceps to manipulate the virtual patient's intestine and expose a target segment of the intestine. With the target segment of the intestine exposed and accessible, the user manipulates the laparoscopic hand controller to apply virtual staples to the target segment via the virtual manual laparoscopic stapler tool.

Example 2 - Collision resolution from the user console

[00150] When using the virtual reality system, a user may wish to resolve collisions between virtual components of the virtual robotic surgical system, even though the user may not be adjacent to the colliding virtual components (for example, the user may be seated at a distance from the virtual patient, such as at a virtual user console). In this example, the user wears a head-mounted display providing an immersive view provided by a virtual endoscope placed inside the abdomen of a virtual patient. The proximal ends of two virtual robotic arms are attached to separate locations on a virtual patient table, on which the virtual patient lies. The distal ends of the virtual robotic arms support respective tool drivers actuating virtual forceps that are positioned inside the abdomen of the virtual patient. The user manipulates the hand controllers to operate both robotically controlled virtual forceps, which manipulate virtual tissue inside the virtual patient. This movement can cause a collision involving at least one of the virtual robotic arms (for example, a virtual robotic arm can be positioned so as to collide with itself, the virtual robotic arms can be positioned so as to collide with the patient or with nearby obstacles, etc.).

[00151] The virtual reality system detects the collision based on status information of the virtual robotic arms, and alerts the user regarding the collision.
The system displays an overhead view or another suitable view of the virtual robotic surgical system from an appropriate observation point, such as in a window view (for example, a picture-in-picture view). The collision location is highlighted in the displayed window view, such as by outlining the affected colliding components in red or another contrasting color. Alternatively, the user can detect the collision himself or herself by monitoring a camera video feed from a virtual camera placed above the virtual patient table.

[00152] Upon becoming aware of the collision, the user can zoom out or otherwise adjust the scale of his or her immersive view of the virtual reality environment. The user can engage an arm repositioning control mode that locks the position and orientation of the virtual forceps inside the patient. Using the hand controllers in an object-gripping user mode, the user can grab virtual contact points on the virtual robotic arms and reposition the virtual robotic arms so as to resolve the collision, while the control mode maintains the position and orientation of the forceps during the arm repositioning. Once the virtual robotic arms are repositioned such that the collision is resolved, the user can zoom back to the previous observation point, exit the arm repositioning control mode, and resume using the hand controllers to operate the virtual forceps inside the virtual patient.
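The collision detection referred to above could, for instance, rely on standard proximity geometry over the arms' status information; the sketch below models two arm links as capsules and is an assumed illustration, not the disclosed detection code.

```python
# Minimal sketch (standard geometry): checking for a collision between two
# virtual robotic arm links modeled as capsules (line segments with radii),
# using the arms' reported link endpoint positions.
import numpy as np

def segment_distance(p1, q1, p2, q2, samples=32):
    """Approximate closest distance between segments p1-q1 and p2-q2 by sampling."""
    t = np.linspace(0.0, 1.0, samples)
    a = np.asarray(p1) + np.outer(t, np.asarray(q1) - np.asarray(p1))
    b = np.asarray(p2) + np.outer(t, np.asarray(q2) - np.asarray(p2))
    return min(np.linalg.norm(a[i] - b[j]) for i in range(samples) for j in range(samples))

def links_collide(link_a, link_b):
    (p1, q1, r1), (p2, q2, r2) = link_a, link_b
    return segment_distance(p1, q1, p2, q2) <= r1 + r2

arm1_link = ((0.0, 1.0, 0.0), (0.4, 1.3, 0.0), 0.05)   # (start, end, radius), assumed values
arm2_link = ((0.5, 1.3, 0.1), (0.1, 1.1, 0.0), 0.05)
if links_collide(arm1_link, arm2_link):
    print("collision: highlight the affected links in a contrasting color")
```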
Example 3 - Coordinated relocation of multiple surgical instruments from the user console

[00153] When using the virtual reality system, a user may find it useful to remain substantially in an endoscopic view and move multiple virtual surgical instruments (for example, end effectors, cameras) as a group rather than individually within the virtual patient, thereby saving time, as well as making it easier for the user to maintain contextual awareness of the instruments relative to the virtual patient's anatomy. In this example, the user wears a head-mounted display providing an immersive view provided by a virtual endoscope placed inside the abdomen of a virtual patient. The proximal ends of two virtual robotic arms are attached to separate locations on a virtual patient table, on which the virtual patient lies. The distal ends of the virtual robotic arms support respective tool drivers actuating virtual forceps positioned in the pelvic area of the virtual patient. The user can manipulate the hand controllers to operate the virtual forceps.

[00154] The user may wish to move the virtual endoscope and the virtual forceps to another target region of the virtual patient's abdomen, such as the spleen. Instead of moving each surgical instrument individually, the user can engage a coordinated relocation mode. Once this mode is engaged, the endoscopic camera view zooms out along the axis of the endoscope to a distance sufficient to allow the user to see the new target region (the spleen). A spherical indicator is displayed at the distal end of the endoscope that encapsulates the distal end of the virtual endoscope and the distal ends of the virtual forceps. The user manipulates at least one hand controller to withdraw the virtual endoscope and virtual forceps from the workspace (for example, until the user can see the distal end of the virtual cannula in the virtual endoscope view), and then grabs and moves the spherical indicator from the pelvic area to the spleen. Once the user finalizes the new target region by moving the spherical indicator to it, the virtual endoscope and virtual forceps automatically travel to the new target region, and the endoscopic camera view zooms in to show the new target region. Throughout this relatively large-scale movement, the user views the virtual environment with a substantially endoscopic view of the virtual environment, thereby allowing the user to maintain awareness of the virtual patient's anatomy rather than shifting his or her focus between the instruments and the anatomy.

[00155] The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to best utilize the invention and the various embodiments, with various modifications, as are suited to the particular use contemplated. The following claims and their equivalents are intended to define the scope of the invention.
Claims (15)

1. A virtual reality system for visualizing virtual robotic surgery, characterized by the fact that it comprises: a processor configured to generate a virtual operating room comprising one or more virtual robotic arms mounted on a virtual operating table, one or more virtual surgical instruments, each coupled to a distal end of a virtual robotic arm, and a virtual patient on top of the virtual operating table; and a handheld device communicatively coupled to the processor, wherein the handheld device is configured to manipulate the virtual robotic arms and the virtual surgical instruments to perform virtual surgery on the virtual patient; wherein the processor is configured to render the virtual surgery on the virtual patient in the virtual operating room on a display.

2. The system according to claim 1, characterized by the fact that generating the virtual operating room is based on predetermined models for the virtual operating room, the virtual robotic arms, the virtual operating table, the virtual surgical instruments, and the virtual patient.

3. The system according to claim 2, characterized by the fact that each of the one or more virtual surgical instruments passes through a virtual cannula and has a distal end positioned inside the abdomen of the virtual patient.

4. The system according to claim 3, characterized by the fact that the handheld device is configured to select a number and locations of ports for entry of the virtual surgical instruments, and to determine a number and positions and orientations of the virtual robotic arms for the virtual surgery.

5. The system according to claim 1, characterized by the fact that the handheld device is configured to create a portal at a location in the virtual operating room, the portal allowing quick navigation to that location upon selection of the portal.

6. The system according to claim 5, characterized by the fact that the portal is positioned inside or outside the virtual patient.

7. The system according to claim 1, characterized by the fact that the virtual surgical instruments comprise a virtual endoscope having a virtual camera positioned inside the virtual patient's abdomen and providing a view of a surgical workspace inside the virtual patient's abdomen.

8. The system according to claim 7, characterized by the fact that the processor is configured to render the view of the surgical workspace from the virtual endoscope on the display.

9. The system according to claim 7, characterized by the fact that the handheld device is configured to move the virtual endoscope and the other virtual surgical instruments together to another region of the virtual patient's abdomen in a coordinated relocation mode.

10. The system according to claim 9, characterized by the fact that, in the coordinated relocation mode, the virtual camera zooms out along a geometric axis of the virtual endoscope to include the other region of the abdomen in the view of the surgical workspace.

11. A method to facilitate navigation of a virtual robotic surgical environment, the method characterized by the fact that it comprises:
displaying a first-person perspective view of the virtual robotic surgical environment from a first observation point in the virtual robotic surgical environment; displaying a first window view of the virtual robotic surgical environment from a second observation point, wherein the first window view is displayed in a first region of the displayed first-person perspective view; displaying a second window view of the virtual robotic surgical environment from a third observation point, wherein the second window view is displayed in a second region of the displayed first-person perspective view; and, in response to a user input associating the first and second window views, sequentially linking the first and second window views to generate a trajectory between the second and third observation points.

12. The method according to claim 11, characterized by the fact that at least one of the first and second window views of the virtual robotic surgical environment is displayed at a scale factor different from that of the first-person perspective view.

13. The method according to claim 11, characterized by the fact that at least one of the first or second observation points is located inside a virtual patient.

14. The method according to claim 11, characterized by the fact that it further comprises: receiving user input indicating placement of a virtual camera at a fourth observation point different from the first observation point; generating a virtual camera perspective view of the virtual robotic surgical environment from the fourth observation point; and displaying the virtual camera perspective view in a region of the first-person perspective view.

15. The method according to claim 14, characterized by the fact that the virtual camera is one of a virtual endoscopic camera placed inside a virtual patient or a virtual video camera placed outside the virtual patient.